Comments provoked by Dr. Ben Goertzel's web site: "Hyperrealism" and "The Miraculous Mind Attractor".

I am reminded of Minsky's The Turing Option and George Dyson's Darwin Among the Machines. I share Ben's intuition about the close relationship between spirituality and complex systems. I have not had a chance to read the book "The Evolving Mind", but it is clear that Ben knows of the work of Gerald Edelman. Ben also cites "Dancing Fitness Landscapes and Coevolution to the Edge of Chaos", but I wonder if he is also familiar with Stuart Kauffman's book "The Origins of Order", in which Kauffman repeatedly makes the point that complex adaptive systems rely on a "hierarchy" of mechanisms in order to "tune their response" to their complex environments.

In my study of the mechanisms by which biological brains store memories, I have developed the belief that it is a "hierarchy" (some type of flexible set) of molecular memory mechanisms embedded in specific types of neural networks that allows our brains to model a complex environment. Biological evolution has given us 1,000,000,000 years of trial-and-error development of useful ways of embedding molecular memory mechanisms in network modules which are in turn embedded in global brain networks (I like the term "dual network structure") in such a way as to provide associative memory. My intuition tells me that it is foolish for anyone who is interested in mind to ignore what has been "learned" through this vast evolutionary "experience", which has searched a huge state space and found workable mind attractors.

As a biologist, I wonder whether computer scientists with an interest in the mind and in exploring it "using tools from computing, mathematics, philosophy and psychology" can view this as a broad enough endeavor to include an interest in the molecular and cellular mechanisms used by biological brains to produce minds. I know that many AI researchers and philosophers are not interested in the mechanical details of "wet brains". I think the study of mind will work best when we unite philosophy, computing, and biology.

Ben and I agree that "the essence of mind lies in abstract emergent structures/dynamics rather than in any particular mechanisms." I have no doubt that it will be possible for AI researchers to construct minds in man-made robotic systems by means of mechanisms that are different from those used in biological brains, just as we build airplanes that are able to fly using different mechanical components than one finds in birds. I would argue that the "essence of flight" is the production of adequate lift by a mobile, self-powered mechanism. The creation of man-made flying machines required the assembly of the correct combination of propellers, engine components, fuel, wings, etc. I agree that the "essence" of mind is a complex dynamic system which can produce a complex adaptive response to a complex environment. The creation of man-made minds will require the assembly of the correct components.

I think that AI research has demonstrated that constructing minds is a hard problem. I believe that AI researchers are making their task harder by not attempting to include some key components in their mind machines. I view existing attempts at AI as similar to attempts to construct airplanes before the invention of the internal combustion engine. My argument is that since biological brains contain a functioning instantiation of a mind machine, we should look closely at biological brains in order to see what tricks they use to construct complex adaptive systems. I in no way deny that AI researchers should be able to make a better mind using different specific tricks than those used in biological brains, but they may benefit from including the same types of tricks. It is clear that birds have a compact "engine" to power their flight: muscle tissue. It seems obvious to us now that useful man-made airplanes need compact and powerful engines and that people can build man-made engines that are far superior to muscles. However, people spent much effort trying to make aircraft before internal combustion engines were available. Maybe biology can reveal to us the equivalent of the internal combustion engine that we need in order to get AI off the ground.

If the bird/airplane analogy is valid, then what should we expect to be able to learn from the study of biological brains? I fully accept the two key ideas of AI research: 1) minds must have "emergent" high-level functions such as associative memory systems and "computational algorithms"; 2) minds must be constructed from a large number of some type of low-level components that can function as interacting distributed processors. Of course, many AI researchers are concerned only with algorithms and see no need even to think about the mechanical implementation of their algorithms; they are content to use whatever tools are inside the computers they program. To me, this is like trying to construct aircraft with whatever parts are lying around the bicycle shop, without being concerned to find the most powerful and lightweight engine that is available. I think AI research can benefit by being more open-minded about the nature of the low-level components, and this is obviously where we should expect neuroscientists to be able to offer some insights into the low-level components used by biological brains.

Fifty years ago a basic conceptualization of neuronal function and neuronal network dynamics was formulated and found to be theoretically adequate for the construction of universal Turing machines, and so most AI researchers lost interest in molecular and cellular neurobiology. The attitude is, "If we know theoretically that we can construct any desired algorithm from networks containing any of several different types of simple model 'neurons', then why should we concern ourselves with the baroque complexity of biological neurons?" Thus, even those AI researchers who have attempted to focus their efforts on machine learning and issues like "trainable neural networks" are generally uninterested in making their model neural networks "biologically realistic". There has been some interest in "genetic" or "evolutionary" approaches within AI, but these methods ignore the details of brain biology and simply emphasize general rules for searching complex state spaces by means of non-biological algorithms.

I am interested in Gerald Edelman's "Neural Darwinism" approach to mind for several reasons, the least of which is his emphasis on the power of neuronal group selection (I would classify it as over-emphasis, to the exclusion of other mechanisms). More important is his emphasis on global brain functions, which makes his theoretical approach directly applicable to robotics. Most important is the approach taken by Edelman's research team as illustrated in Chapter 7 of his book, "Neural Darwinism". I know that most biologists cannot read and understand this chapter, simply because Edelman throws in a few differential equations. I know that there are AI researchers who have read Chapter 7 and fallen asleep because they long ago constructed much more sophisticated neural network models than Edelman discusses. However, look at what Edelman does in Chapter 7. He pulls out of his imagination two simple (yet biologically motivated) rules for synaptic strength modification (his pre- and post-synaptic rules). These rules are then embedded in a specific (biologically motivated) type of network of neuronal groups. Computer-based simulations were then done, such as that illustrated in Figure 7.5 of "Neural Darwinism", which is a proof of concept that biologically plausible molecular mechanisms (embedded in the synapse modification rules), when placed in the context of a biologically plausible network architecture (in this case a sensory "map" composed of a large number of neuronal groups), are able to simulate an important "higher level" phenomenon of biological brain function, namely, the experimentally observed plasticity of somatosensory maps. For Edelman, this is just the starting point for his theory that such synapse modification rules can be embedded in "global maps" where they can provide for associative memory and complex adaptive behavior. Edelman is a biologist, interested in explaining complex animal behavior in terms of brain physiology.
But what is the lesson for AI researchers?

The lesson is not in any of the details of Edelman's model, such as the particular differential equations used or the particular network architecture employed. We know that Edelman either over-simplified all of these details or was simply wrong about them. The important lesson is the style of the modeling approach and the fact that Edelman's approach is easily extended to more complex models, whether founded on better understanding of biological brains or constructed from scratch in AI models. This extensibility of Edelman's approach is discussed at the end of Chapter 7 under the heading "Transmitter Logic", where Edelman mentions the fact that the ability of a brain (or mind) to have a "rich context-dependent nature" (translating this into the modern language used to discuss complex adaptive systems, we would say "the ability of a brain to be a complex adaptive system") is dependent on the richness (diversity) of the biochemical mechanisms for synapse modification which are available as a sort of tool kit in organisms with brains. This diverse collection of synaptic strength modification rules can be "plugged into" the similarly rich repertoire of neuronal network architectures which are constructed during embryogenesis. This richness in the network "tricks" available to biological brains is the key idea.
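To make this style of modeling concrete, here is a deliberately tiny sketch of my own (not Edelman's actual equations): a use-dependent "presynaptic" rule and a competitive "postsynaptic" rule embedded in a small map of neuronal groups. Every constant, name, and rule here is invented for illustration; the point is only that simple local modification rules, plugged into a map architecture, reshape the map so that the more active input line comes to dominate it.

```python
# Toy sketch: two biologically motivated synapse-modification rules
# embedded in a small "map" of neuronal groups. Not Edelman's equations;
# all constants are invented for illustration.

def run_map(inputs, n_groups=4, steps=200, lr_pre=0.01, lr_post=0.005):
    # one weight per (input line, group); start uniform
    w = [[0.5 for _ in range(n_groups)] for _ in range(len(inputs[0]))]
    for t in range(steps):
        x = inputs[t % len(inputs)]
        # postsynaptic group activity = weighted sum of its inputs
        act = [sum(x[i] * w[i][g] for i in range(len(x))) for g in range(n_groups)]
        for i in range(len(x)):
            for g in range(n_groups):
                # "presynaptic" rule: strengthen synapses whose input line is active
                w[i][g] += lr_pre * x[i] * act[g]
                # "postsynaptic" rule: activity-dependent decay keeps weights bounded
                w[i][g] -= lr_post * act[g] * w[i][g]
                w[i][g] = max(0.0, min(1.0, w[i][g]))
    return w

# two input lines; line 0 is active three times as often as line 1,
# so its synapses should come to dominate the map
patterns = [[1, 0], [1, 0], [1, 0], [0, 1]]
w = run_map(patterns)
```

After running, the weights from the frequently active input line end up larger than those from the rarely active line, which is the map-plasticity effect in miniature.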

"How do molecular memory mechanisms interact with neural net dynamics as conventionally studied?" As discussed above, Edelman described examples of how his proposed molecular memory mechanisms (which he mathematically encoded into his synapse modification rules) produce neural net dynamics which allowed for simulation of a particular experimentally well-studied aspect of "high-level" brain function, namely, environmentally induced plasticity in somatosensory cortex function. It is clear that every specific combination of synapse modification rules and network architectures will have its own unique functional dynamics. The point is that biology suggests it is useful for a brain to have a rich "tool kit" containing a diverse collection of synapse modification rules (one particularly important aspect of this diversity is the need for a hierarchy of mechanisms from [quick and temporary] to [slow-to-form and long-lasting]) and a diverse collection of sub-network architectures. Basically, brains are designed to have a diversity of ways of responding, and this diversity allows for efficient responses to complex environments. There is an astronomical number of possible network architectures, but biological evolution has selected a stable and effective set of architectures which we now find in our brains. It may be true that some of these architectures are near optimal for certain tasks such as visual perception. It may be that AI researchers should learn the details of particular biological networks and then translate the details of these architectures into computer implementations. However, that strikes me as being similar to powering airplanes by human muscle power. It is more likely that the key thing to be learned from brains is the general idea of utilizing diverse synapse modification rules and diverse network architectures.
While it may be theoretically possible to do every brain task with one all-purpose mechanism, it is clearly advantageous for biological brains to apply a diverse tool kit and let particular tricks deal with particular tasks. I think this diverse tool kit is the basis of our amazing capacity to learn from a complex environment, to be programmed by a complex environment, and to respond to a complex environment with complex adaptive responses.

Analogy to computer languages: you could do everything in machine language, but why not use special tricks and "higher level" languages when available to simplify jobs? Analogy in mathematics: you can solve equations "the hard way", but sometimes just doing a transformation (say, a Laplace transform) can simplify a problem. Sure, we can make all possible network dynamics using one kind of "stupid" model neuron, but why not take advantage of more complex model neurons and synapse modification rules?

Sure, maybe some day a brilliant graduate student will come along who will just sit down and compose a mind algorithm in LISP or an "evolutionary" learning algorithm in JAVA that will be released onto the Internet, where it will learn intelligent behavior, but I'm not holding my breath. I think our goal of trying to understand mind would be better served by closer cooperation between biologists and AI researchers. I know biology would benefit if more computer programmers made biologically motivated network models; it would help make sense of all the data we are accumulating about the brain. I suspect that AI researchers would have an easier time creating complex adaptive systems if they used more of the tricks that living organisms have discovered during the past 1,000,000,000 years.

More on the analogy: making intelligent artifacts is like making mechanical flying machines.....we really should take the time to understand how biological systems have solved the problem, even if we do not want to make an airplane with feathers.

I never hope to live or die by analogy, but.......
The "correct" solution to mechanical flight was to take only a minimum amount of information from observation of birds (the need for a compact, lightweight power source) and then figure out how to use available technology to make a machine fly. This scientific solution was better than the prescientific idea that certain substances or "forms" are able to naturally rise or sink according to their essential properties. Thus, the Icarus legend, in which bird-like wings give men the power of flight.

What about mechanical minds? Most AI researchers seem to agree on the usefulness of a hierarchy of nodes joined by various types of links to create "groups" which are further connected to form a global network (if we are interested in constructing something with the "specialization/generalization balance" found in human minds, I would insist that we also need a diversity of methods for linking the groups). "All" we need to do is figure out how to make such a network that is a spontaneously adaptive system. Hmmm...this is not easy!
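As a rough data-structure sketch of this "dual network" idea (purely illustrative; the group names and link types below are invented, and nothing in it is adaptive yet): nodes are clustered into densely connected groups, and the groups themselves are joined by a sparser global web of links of several different types.

```python
# Minimal sketch of a "dual network": dense local connectivity inside
# groups, plus a sparse global web of typed links between groups.
# Names and link types are invented for illustration.

class Group:
    def __init__(self, name, nodes):
        self.name = name
        self.nodes = list(nodes)
        # dense local connectivity: every ordered pair of distinct nodes
        self.internal = {(a, b) for a in nodes for b in nodes if a != b}

class DualNetwork:
    def __init__(self):
        self.groups = {}
        self.global_links = []  # (group_a, group_b, link_type): diverse link types

    def add_group(self, name, nodes):
        self.groups[name] = Group(name, nodes)

    def link(self, a, b, link_type):
        self.global_links.append((a, b, link_type))

    def neighbors(self, name):
        return ({b for (a, b, _) in self.global_links if a == name}
                | {a for (a, b, _) in self.global_links if b == name})

net = DualNetwork()
net.add_group("visual", ["v1", "v2", "v3"])
net.add_group("motor", ["m1", "m2"])
net.add_group("memory", ["h1", "h2"])
net.link("visual", "memory", "associative")
net.link("memory", "motor", "feedback")
```

The hard part, of course, is not building this structure but making it spontaneously adaptive; the sketch only shows the two levels of organization and the diversity of link types between groups.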

"So you advocate building specialized AI hardware?"
I am always interested in the possibility of constructing powerful tools, but it is the job of craftsmen to select among the available tools in order to best complete a particular job. Maybe clever computer programs can do most of the work from inside general purpose computers, but why not take full advantage of special purpose robotic constructs?

"The Webmind architecture" described at Ben's web site is said to have "general intuitive understanding" of data on the internet and the interests and responses of human users. This sounds like an expert system, a type of data manipulation prosthetic. This is useful technology, but we must face the standard put-down of all artificial intelligence research: "true" intelligence is what still remains to be found in those problems that AI has not yet solved.

Any complex adaptive system comes to obtain semantic content that is only as powerful as the environment with which it interacts. Until we challenge our artificial intelligences with the diversity of environmental challenges that confront a human infant, we are not going to produce an artificial construct with anything like a human mind. For example, it has been suggested that the cerebellum evolved as a special computing device for storing patterns that allow for smooth coordination of muscle contractions in behaving animals. This basic computational power was later utilized for additional tasks such as the coordination of patterns of abstract thought. It may be that until we build a machine that can match the primate capacity for motor coordination, we will not have a machine powerful enough to maintain human abstract reasoning and human language. It may be that until we build a machine that can match the primate visual system's ability to detect and categorize patterns, we will not have a machine powerful enough to maintain human abstract reasoning and human language. Alternatively, some JAVA-based system may be powerful enough in theory, but simply fail to be usefully applied to tasks involving human language understanding and communication, for the same reason that a child who is never exposed to humans using language will fail to use language. A human child kept in a sensory deprivation tank and only fed a textual data stream would also fail to develop a human mind.

Are there general lessons like 'diversity' and 'specialization/generalization balance' that can be transferred from biology to AI? Should we imagine that there are specific things like 'memory mechanisms' that can be transferred into AI research as they are understood in biological brains? If biologists could clearly describe the network that allows the visual cortex and associated brain regions like the thalamus to function both for pattern detection and associative memory, then this network architecture could be translated into computer architectures that would have the same functions. I suspect that there are about a dozen important rules of "transmitter logic" and a few dozen important rules of network connectivity that biological brains use to make such "dual networks". These particular rules that solve the problem have been selected by evolutionary processes from a huge state space of possible network architectures. I have no doubt that by trial and error computer programmers can discover additional solutions to this problem. However, it is my guess that computer programmers could benefit by paying attention to how brains have solved the problem. Of course, this would involve computer programmers and biologists actually cooperating with each other.

What can an AI researcher expect to learn by paying more attention to how brains work?

The molecular mechanisms of synapse modification are of importance to global brain function only from the perspective of the details of how those mechanisms are embedded in specific network architectures. However, most of the work that is done to identify the mechanisms of synapse modification is done independently of attempts to study neuronal networks.

Transmitter Logic
There are a few standard ways of classifying the molecular mechanisms that brains use to modify synaptic connections:
1) according to the rapidity with which the changes in synaptic function are induced
2) according to how long lasting the changes in synaptic function are
3) based on the spatial range over which the changes in synaptic function can be induced
4) based on the amount of dependence of a change in synaptic function on what is going on at other synapses
5) according to the excitatory or inhibitory effects of the mechanism

Electrophysiologists have thus identified and classified basic mechanisms of inhibition and excitation, post-tetanic potentiation, post-tetanic depression, habituation, sensitization, long-term potentiation (LTP), long-term depression (LTD), and various types of heterosynaptic facilitation. Although these dozen or so synapse modification mechanisms are often studied independently, they are most interesting when considered in terms of how they are related during neural network activity. How do short-term synaptic responses sometimes fade away while in other cases their effects linger on by means of activation of long-term response mechanisms? Not only do local synaptic modifications alter global network function, but distant events in networks can come to control local synaptic responses.
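The fast/slow hierarchy just described can be sketched in a few lines: a short-term component of synaptic strength decays quickly, but repeated activation lets its effect "linger" by driving a slow-to-form, long-lasting component. The constants below are invented for illustration, not measured values.

```python
# Hedged sketch of a fast/slow hierarchy of synapse modification:
# a short-term change (e.g. PTP-like) decays each step, while sustained
# activity recruits a slow, long-lasting (LTP-like) change.
# All constants are illustrative.

def simulate(spike_times, steps=100, tau_fast=0.7, consolidate=0.05):
    fast, slow = 0.0, 0.0
    for t in range(steps):
        fast *= tau_fast          # short-term change fades every step
        if t in spike_times:
            fast += 1.0           # each activation boosts the fast component
        slow += consolidate * fast  # sustained activity builds the slow component
    return fast, slow

# a single activation fades away; a sustained train leaves a lasting trace
fast_once, slow_once = simulate({5})
fast_train, slow_train = simulate(set(range(5, 40)))
```

After 100 steps both fast components have decayed to nearly nothing, but the slow component left by the sustained train is far larger than the one left by the single event, which is the sense in which short-term responses "linger on by means of activation of long-term response mechanisms".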

There are a few popular systems for investigating how various rules of "transmitter logic" can productively be embedded in specific network architectures. Edelman's book "Neural Darwinism" deals with a model of somatosensory cortex that includes two rules for synaptic modification. Crick's book "The Astonishing Hypothesis" deals with a model of "binding" which involves a prominent role for post-tetanic potentiation within the cortical maps of the visual cortex. Kandel's group has studied molecular mechanisms for habituation, sensitization, and heterosynaptic facilitation in the context of the sensory neuron/interneuron/motor neuron pathways of Aplysia. Many models of cerebellar network function have been proposed which incorporate mechanisms such as LTP and LTD (for example, James Houk's group, 1996, Behavioral and Brain Sciences 19:368). Many models have been proposed that attempt to explain hippocampal functions (such as the formation of "place cells") in terms of the known network structure of the hippocampus and well-studied hippocampal synapse modification mechanisms such as LTP.

Some of the common network architectural features of such models include local inhibitory feedback, positive feedback loops between cortical maps and between cortical maps and thalamic maps, topography-preserving mappings between brain regions, and "mirror image" mappings between adjacent cortical maps. Of course, there are many other brain region-specific features, such as climbing fibers in the cerebellum. Unfortunately, most people who make biologically motivated network models are not concerned with models of global brain function. Most models never directly relate their results to complex animal behavior. Most biologists still feel that it is hard enough to model individual brain regions; attempts to model global brain functions (behavior) are too hard. Edelman outlined a way to deal with global brain functions in terms of network models. I think we need more people who are willing to take on this challenge. In particular, we need to stop avoiding the problem of making learning machines that can learn in the same way people learn.

There is no short-cut method that allows us to put the semantic content of human language into a machine. I agree that "symbol grounding" must be accomplished through experience, but it cannot be only through linguistic experience. Most of human language semantics is derived from non-linguistic sources. This semantic content of language must be learned by non-linguistic means, through the interaction of a person (or robotic agent) with a complex human social environment. A genetic algorithm which only sifts numeric and textual data by means of "good" and "bad" evaluations of performance can only be as good as the initial bag of tricks (agents) that are provided by the programmer, unless you are willing to wait for "mutations" to improve those agents.
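The point about the initial bag of tricks can be made with a toy selection loop (entirely illustrative; "fitness" is just a number each agent carries): selection alone can never produce an agent better than the best one the programmer supplied, and only mutation can exceed it.

```python
# Toy illustration: selection without mutation is capped by the best
# agent in the initial population; mutation is the only way past it.
# All numbers here are invented for illustration.

import random

def evolve(pop, generations, mutation_rate=0.0, rng=None):
    rng = rng or random.Random(0)   # fixed seed for reproducibility
    for _ in range(generations):
        pop.sort(reverse=True)              # "good"/"bad" evaluation of performance
        survivors = pop[:len(pop) // 2]     # keep the better half
        children = []
        for agent in survivors:
            child = agent
            if rng.random() < mutation_rate:
                child = agent + rng.uniform(-0.1, 0.2)  # occasional variation
            children.append(child)
        pop = survivors + children
    return max(pop)

initial = [0.1, 0.3, 0.7, 0.2]
best_no_mut = evolve(list(initial), 50, mutation_rate=0.0)
best_mut = evolve(list(initial), 50, mutation_rate=0.5)
```

With no mutation the best result is exactly the best initial agent (0.7); with mutation the population can do at least that well and usually better, but only by waiting for lucky variations, which is the slow path the text warns about.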

I like the idea of viewing Minsky-type agents as neuronal groups in order to construct a society of mind. The key problem becomes identifying the types of agents that are needed to create the desired system dynamics. However, if LISP or C or JAVA programming is the way to "identify" these required agents, then I think we are in trouble. Given such a restriction, all you can do is make expert systems built from agents that you know how to design for dealing with specific problems (data mining, market trend forecasting).

What are the sources of experimental data about the mechanisms of biological memory systems?
There are two experimental techniques that are used to investigate the distribution of the various synapse modification mechanisms. Originally, electrophysiology was the only means: record the signals passing across synapses and see how those signal strengths change as a function of the activity of the synapses. Due to technical limitations, this is a rather slow and cumbersome method, but it is the best when available. As the synapse modification mechanisms have become understood in terms of the protein molecules which cause the electrophysiological changes, it has become possible to apply the techniques of molecular and cellular biology to this problem. For example, a well-studied form of LTP involves the NMDA receptor. It is possible to use antibodies that are specific for the protein subunits of NMDA receptors to determine the distribution of NMDA receptors in the brain. This can be done on complete tissue cross-sections, quickly providing a map of the brain regions containing NMDA receptors. The antibody will bind specifically to the target protein and can be easily visualized by attaching to the antibody either a fluorescent molecule or an electron-dense granule (like a gold particle). Confocal and electron microscopy can be used to detect NMDA receptors in individual synapses. Another technique which is becoming popular involves genetic engineering and attachment of the DNA sequence of Green Fluorescent Protein (GFP) to the gene sequence of a protein of interest (such as a subunit of the NMDA receptor). The NMDA receptor protein is then easy to detect by fluorescence microscopy without the need for an antibody. The bottom line is that each protein molecule that is responsible for the various synapse modification mechanisms has its own unique distribution in the brain. The distributions are determined by two things. First, the production of each protein is regulated at the level of gene transcription. Different neuron types make different proteins.
Second, each protein is targeted to particular locations within the cell. In the case of neurons, this may involve targeting each protein involved with synapse modification to a particular type of synapse (for example, targeting NMDA receptors to only the excitatory synapses of a cell's dendritic tree). One of the goals of the Human Brain Project is to establish easy-to-use databases that will contain all of this kind of information. Currently, this information is spread out in little pieces throughout the neurobiology journals and is hard to access.

Can we correlate the distributions of the synapse modification mechanisms with particular brain functions like episodic memory and perception? With considerable effort such correlations can be demonstrated. Probably the most celebrated example is the role of NMDA-type glutamate receptors in a particular type of synapse in the hippocampus. It has been shown that the NMDA receptors are located in these particular synapses and are required for the production of LTP in these particular synapses, and that if you block the normal function of these receptors then you block the formation of new episodic memories. Glutamate is one of the major excitatory neurotransmitters in the brain. There are three main types of glutamate receptors. The AMPA/KA receptors are important for the very rapid excitatory response of cells to glutamate. Most glutamate synapses have NMDA receptors, and LTP is found in many parts of the brain. Some glutamate synapses contain a third class of glutamate receptor, mGlu receptors. These can sometimes (for example, in a particular type of synapse in the cerebellum) be coupled to other synaptic proteins (such as calcium ion channels) in order to cause LTD. This LTD in the cerebellum is required for the learning of tasks involving adaptation of consciously produced muscle contraction strengths to changes in the environment. In the case of perception, Francis Crick's book "The Astonishing Hypothesis" is worth reading. Crick's book is all about visual awareness: what happens within about 100-200 milliseconds after you open your eyes and focus your conscious attention on some aspect of the visual scene. At the end of chapter 17, Crick talks about the possible role of post-tetanic potentiation (PTP) in visual awareness. He suggests that it could be particularly important in layer six of the primary visual cortex, where there are neurons that reciprocally project back to the visual part of the thalamus.
Crick suggests that a type of positive feedback between the thalamus and the primary visual cortex may depend on PTP and be involved in the restriction of our conscious awareness and attention to particular subsets of the totality of visual input. The elements of visual input that are less relevant to the organism will not establish the positive feedback loop, and will simply fade from consciousness. At this point Crick makes an interesting comment:

"Unfortunately, little work is being done on these transient changes [PTP], mainly because long-term changes in synaptic strength [like LTP], a very hot topic at the moment, are easier to study. Nor have they been allowed for in most theoretical work on neural networks."

Dependence of change in a synapse's function on what is going on at other synapses......
The "classic" example of this is the "associativity" of NMDA receptor-dependent LTP in the hippocampus. LTP can only be induced in these hippocampal synapses in an associative or cooperative way. Activation of a single synapse cannot lead to LTP. Several synapses onto the same part of a dendrite must be active at about the same time in order for LTP to be induced. The reason is that the membrane potential of the dendrite must be reduced by a fairly large amount (an amount larger than can be achieved by activating any single synapse) before the NMDA receptors become activated. In other words, activation of NMDA receptors and induction of LTP at any given synapse requires both activation of a presynaptic axon terminal and activation (by glutamate acting through AMPA/KA receptors) of the dendrite of the postsynaptic cell by several presynaptic terminals. The induction of LTD in the cerebellum is also proposed to be heterosynaptic: activation of synapses onto cerebellar Purkinje cells from "climbing fibers" (which carry signals originating in distant locations such as the cerebral cortex) is able to induce LTD at the synapses onto cerebellar Purkinje cells from granule cells (which mainly carry signals from the brain stem and the spinal cord). Thus, the climbing fiber inputs seem to be able to block inputs from the granule cells, even though both release excitatory neurotransmitters onto the Purkinje cells. The well-studied case of classical conditioning in Aplysia involves synapses from interneurons that are carrying pain information. These interneurons form synapses on the presynaptic axon terminals of sensory neurons which carry tactile information. Paired activation of the pain pathway and the tactile pathway results in potentiation of future responses to signals in the tactile pathway alone. These are all examples where network architecture is critical for the desired regulation of synaptic function.
In other words, the molecular mechanisms that regulate synaptic function make no physiological sense unless they are embedded in the correct type of network. For example, it is important that the pain axons synapse onto the tactile axons, not the other way around, and it is important that the climbing fibers each make large numbers of synaptic contacts on a single cerebellar Purkinje cell, so that activation of a climbing fiber can efficiently inactivate the thousands of granule cell synapses on that Purkinje cell's huge dendritic tree.
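The associativity described above can be captured in a few lines of illustrative code: a synapse is only potentiated when enough co-active synapses depolarize the same dendrite past a threshold, so no single synapse can induce LTP by itself. The threshold and step sizes are invented for the example, not physiological values.

```python
# Hedged sketch of an NMDA-like cooperative (associative) LTP rule:
# potentiation happens only when the summed drive from co-active synapses
# crosses a threshold, and only the active synapses are changed.
# Threshold and step size are invented for illustration.

def update_weights(weights, active, threshold=2.5, step=0.1):
    # total depolarization contributed by all co-active synapses (AMPA-like drive)
    depolarization = sum(weights[i] for i in active)
    new = list(weights)
    if depolarization >= threshold:   # the NMDA "gate" opens only cooperatively
        for i in active:              # ...and only active synapses are potentiated
            new[i] += step
    return new

w = [1.0, 1.0, 1.0, 1.0]
w_single = update_weights(w, active={0})       # one synapse alone: no LTP
w_coop = update_weights(w, active={0, 1, 2})   # three together: LTP at all three
```

Note that the inactive fourth synapse is untouched even when the cooperative threshold is crossed, which mirrors the synapse-specificity of the biological rule.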

The rapidity of changes at synapses can range from fractions of a millisecond to hours, although if a slow-to-appear change is going to happen, it will often be preceded by other faster-acting and short-term changes that are important intermediaries in producing the slow-to-appear but long-term change. Some forms of PTP may only last a few milliseconds, others several tens of milliseconds. Very long-term forms of modulation of synapses have been experimentally observed to last weeks. Some functional changes at synapses involve the destruction of synapses, the alteration of synapse size or shape, or the creation of new synapses. Presumably, some of these changes could last the life of an animal. Some synaptic changes are thought to be dependent only on the history of activity at the single synapse being modified. There are known or proposed mechanisms for closely coupling changes at two, a few, many, and very many synapses. Signals like those provided by nitric oxide can spread from many individual sources (activated synapses) in a neural network, and overlapping pulses of NO can presumably modulate large populations of synapses. Some brain chemicals like dopamine are thought to be able to function as neuromodulatory signals that can be produced by a distributed network of axons or by a particular local collection of neurons, from which they spread over large distances to set a "tone" throughout a large brain region. If you are looking for diversity and fractal scaling, it is there. A nicotine buzz can heighten awareness over the entire cerebrum, just as a release of adrenalin can cause a global alteration in consciousness. Dopamine from the substantia nigra can set the proper "tone" throughout large parts of the striatum and globus pallidus. Activation of groups of cells in the amygdala (during an emotional response) may be able to re-program the hippocampus to enhance or otherwise modulate formation of episodic memories.
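One simple way to picture such a neuromodulatory "tone" (a sketch of my own, with invented gain values): a broadcast dopamine-like signal carries no synapse-specific information itself, but multiplicatively scales the plasticity of every synapse in a region at once, so one distant event reshapes local learning everywhere.

```python
# Hedged sketch of neuromodulation as a global "tone": a broadcast signal
# scales the learning rate of every synapse in a region, without carrying
# any synapse-specific information itself. Gain values are invented.

def hebbian_step(weights, pre, post, tone=1.0, base_lr=0.01):
    # tone multiplies plasticity everywhere at once; the local pre*post
    # correlation still decides WHICH synapses change
    lr = base_lr * tone
    return [w + lr * p * q for w, p, q in zip(weights, pre, post)]

w0 = [0.5, 0.5, 0.5]
pre, post = [1, 0, 1], [1, 1, 1]
w_low = hebbian_step(w0, pre, post, tone=0.2)   # low tone: weak learning
w_high = hebbian_step(w0, pre, post, tone=5.0)  # high tone: same event, bigger change
```

The synapse with no presynaptic activity is unchanged under either tone; the tone only decides how strongly the active synapses respond, which is the sense in which a global signal "sets the tone" for local plasticity.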

There is nothing magic about any of the molecular mechanisms for modulating synapse function in brains. What does seem to be important is the diversity of mechanisms that are available and how they are integrated into specific (and again, diverse) network architectures. During the past 10 years we have gotten our first glimpse at how synapse modification rules embedded in particular networks can account for higher-level brain functions, and in a few rare cases, actual (albeit simple) animal behavior. Much work remains to be done to extend this style of model construction and testing to the most interesting aspects of animal behavior. My guess is that even artificial intelligence researchers (who are not interested in how biological brains work) might benefit from incorporating the general types of tricks used in biological brains into their network models. However, it is clear that embedding a diverse collection of synapse modification mechanisms in a diverse collection of network architectures will not automatically give you what we know as human intelligence. Insects have a diverse collection of synapse modification mechanisms embedded in a diverse collection of network architectures. They can learn, form memories, and adapt to their environment, but only for a very narrow range of environmental influences......their data input stream is very limited. If we are interested in human-like intelligence, we need to challenge our artificial intelligences with the same diversity of interactions with a complex environment that a human infant experiences. Although we can use human language to exchange symbols which allow us to activate rich collections of semantic content inside each other's brains, the actual information content of those symbols is very low. Most of what we know about the world is learned by non-linguistic means.



John William Schmidt