by
TGV
Timestamp: 08.04.2004, 20:24 GMT
Code nature: Object Pascal
Development environment: Delphi 6 Personal IDE
Target hardware platform: Intel x86 32-bit
Target platform: Win32
Target OS: Windows XP
5. Scope (update pending...)
6. Goals (update pending...)
7. Usage (update pending...)
8. Known issues or "a work in progress" (update pending...)
NeuroLab is an application designed to create and manage artificial neural networks (ANNs).
Hardware: theoretically, since it does not use hardware-accelerated graphics, does not require huge processing power in study (i.e. non-time-critical) mode, and its executable and configuration files are small (approx. 2 MB), NeuroLab should run on almost any hardware configuration capable of running a variant of the Microsoft (TM) Windows (R) operating system (OS). Nevertheless, especially since the program's memory usage can become quite high and the GUI requires some display space, the recommended hardware equipment would include (the parentheses enclose the equipment used for development and testing):
- an Intel or compatible x86 CPU, 1000 MHz or above (AMD (TM) Duron (R) ~1200 MHz on a VIA KT333 based EPOX (TM) 8K5A motherboard)
- 128 MB or above system RAM (640 MB DDR PC2100)
- 2 MB free hard drive space (executables and configuration files ONLY), plus up to several GB for ANN file saving (120 GB + 20 GB UDMA 133 Maxtor (TM) hard drives); hard drive performance becomes critical only when saving/opening files or (and especially) when using the swap file
- DirectX compatible, 1024x768 capable video card, 3D accelerator not necessary
- 1024x768 capable display
- network adapters, soundcards or other communication and multimedia devices are not necessary
Software: NeuroLab is a Win32 platform application that requires a variant of the Microsoft (TM) Windows (R) OS in order to run. This version (2.0) was tested ONLY on OSs using a Windows NT kernel, namely Windows XP Professional; it therefore has a good chance of not working on earlier versions of Windows (R). Apart from the environment provided by the OS, NeuroLab needs no other software in order to function (i.e. no third-party libraries or applications).
In NeuroLab I redefine (:-) a few notions from the field of artificial intelligence (or create some new ones) in a very pragmatic way that corresponds to the needs of my artificial intelligence paradigm: a nature-like artificial intelligence system.
Artificial intelligence system (AIS): an independent, possibly self-sustained (from the informational point of view) structured conglomerate, highly fault-tolerant, operating perceptively, self-regulated by means of internal and external feedback/feed-forward pathways, using a non-coherent yet systematic internal language, having no apparent purpose except self-perpetuating continuous functioning, and outputting data according to its own needs in the service of that single purpose. This model uses an analogy (intended to be accurate) with the human nervous system in an attempt to emulate such a conglomerate.
AIS - human intelligence interface: the common ground used for information exchange (input/output) between these two; the AIS would have to learn how to output through this filter and it would have to be taught how by being stimulated through it.
Soma unit (SU): the elementary morphological unit of the AIS, it represents its level 0 entity, being indivisible and having constant characteristics as a structure throughout the AIS (although its contents and momentary parameters will vary). "Soma" means "body" and denotes the neuron body. A "soma unit" emulates the smallest, functionally indivisible zone of the neural membrane.
Soma unit engine (SUE): it should be defined along with the soma unit, because it is the elementary (level 0) functional unit of the AIS; it operates on the soma unit, manipulating its contents and parameters. In programming terms, if the soma unit is the data, the soma unit engine is the code. This engine can be flexible and highly customizable, in contrast to the fixed data structure of the soma unit; e.g. the density of calcium ion pumps on a zone of the neural soma can be emulated by modifying the operating parameters of the SUE, which would be reflected in the contents of the SU. The SU has a single state of equilibrium (inactivity) and several activation states. The SUE is what "cycles" the SU through these various states, according to its own internal rules.
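The SU/SUE split described above (data versus code) could be sketched in Object Pascal roughly as follows. All type and field names here are illustrative assumptions, not NeuroLab's actual declarations, and the threshold/decay parameters merely stand in for whatever operating parameters a real SUE carries:

```pascal
type
  TSUState = (suEquilibrium, suActive);

  { the SU: fixed data structure, varying contents }
  TSomaUnit = record
    Charge: Double;       { momentary parameter }
    State: TSUState;
  end;

  { the SUE: flexible code operating on any SU }
  TSomaUnitEngine = class
  private
    FThreshold: Double;   { customizable operating parameters, }
    FDecay: Double;       { e.g. emulating ion pump density }
  public
    procedure Stimulate(var SU: TSomaUnit; Amount: Double);
    procedure Cycle(var SU: TSomaUnit);
  end;

procedure TSomaUnitEngine.Stimulate(var SU: TSomaUnit; Amount: Double);
begin
  SU.Charge := SU.Charge + Amount;
  if SU.Charge >= FThreshold then
    SU.State := suActive;          { leaves the state of equilibrium }
end;

procedure TSomaUnitEngine.Cycle(var SU: TSomaUnit);
begin
  SU.Charge := SU.Charge * FDecay; { passive return toward rest }
  if SU.Charge < FThreshold then
    SU.State := suEquilibrium;     { the single inactive state }
end;
```

Changing `FThreshold` or `FDecay` reconfigures the behavior of every SU the engine manipulates, without touching the SU data structure itself.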
Soma: as stated above, the soma is the neural body, the level 1 AIS entity; it emulates the neural body by being a bi-dimensional matrix of SUs; it could be thought of as an unfolded neural soma membrane; all SUs in a soma are identical and are manipulated by a single SUE, just as the single cell that is a neuron has but one metabolism and behavior under any given conditions. By integrating SUs and the SUE, the soma becomes a morpho-functional entity.
Axon: another level 1 morpho-functional entity, the axon emulates the corresponding neural part; it is a "specialized" soma, consisting of an array of SUs (not a matrix), all manipulated by a single SUE, as in the soma, which it closely resembles in its intimate functional mechanisms.
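Structurally, the two level 1 entities might be declared along these lines. Again the names are assumptions; the point is the shape of the data (matrix versus array) and the single SUE per entity:

```pascal
type
  { illustrative stand-ins for the SU record and SUE class }
  TSomaUnit = record
    Charge: Double;
  end;
  TSomaUnitEngine = class
    { operating parameters shared by every SU of the owning entity }
  end;

  { level 1: a bi-dimensional matrix of identical SUs }
  TSoma = class
  private
    FUnits: array of array of TSomaUnit;  { dynamic: resizable on-the-fly }
    FEngine: TSomaUnitEngine;             { the single SUE for all SUs }
  end;

  { level 1: a "specialized" soma - an array, not a matrix }
  TAxon = class
  private
    FUnits: array of TSomaUnit;
    FEngine: TSomaUnitEngine;
  end;
```

Using dynamic arrays rather than fixed-size ones is what would allow the "growth and dying" resizing discussed later, at the cost of careful allocation management.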
Terminal: a level 1 morphological AIS entity, it emulates the terminal button of a neuron's axon, containing the information of a connection to another AIS "stimulable" entity (that is another neuron, see below).
Terminal array (TA): a level 1 morpho-functional AIS entity, integrating an array of terminals.
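A terminal, as defined here, only has to carry the information of one outgoing connection. A possible (hypothetical) layout:

```pascal
type
  { one terminal: the connection information held by the efferent neuron }
  TTerminal = record
    AfferentNeuron: Integer;   { which "stimulable" neuron it connects to }
    OnSoma: Boolean;           { True: targets the soma; False: the axon }
    X, Y: Integer;             { coordinates on the target structure }
  end;

  { the TA integrates an array of terminals }
  TTerminalArray = array of TTerminal;
```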
Stimulus: in human physiology (and not only human) it represents a variation in the quantity (or quality) of a form of energy that elicits a response in the living structure perceiving that variation. In this model, a stimulus represents an information variation (regardless of the nature of the information) enforced on the AIS, which may or may not determine a response on its part, depending on the stimulus' and the AIS' properties. A stimulus that "moves" the AIS in a "habitual" way will be called an optimal stimulus. A stimulus that "overloads" the AIS will be called a supraliminal stimulus. A stimulus that "under-stimulates" the AIS, generating but a limited response, will be called a subliminal stimulus. The SU, SUE, soma and axon, but not terminals, are all entities dealing with stimuli.
Impulse: it is the internal "stimulus", the result of stimulation; it is the carrier of information throughout the AIS. Aside from the SU, SUE, soma and axon, the terminals also deal with impulses (they cannot be stimulated directly - externally, that is).
Propagation: the process of carrying the impulses throughout the AIS. The soma and axon entities are able to get stimulated and to propagate the consequent impulses according to the processing performed by their SUEs. Propagation ends when and if all SUs in the soma/axon entity reach the state of equilibrium. In terms of propagation, both soma and axon introduce the notion of "exit point": the soma exit points are represented by the matrix's frame (imagine that unfolded neural membrane again; its contour would be the line where the soma ends and the axon cone begins); the axon's exit point is the last element of its array; bidirectional axons can be implemented, with exit points at both ends of the array. The end of the axon means the beginning of the TA. Given that the only purpose of an AIS is continuous functioning, the AIS will seek to always have impulses propagating throughout it.
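The termination condition above ("propagation ends when all SUs reach equilibrium") suggests a simple serialized loop. A minimal sketch, with a stand-in unit type and a hard-coded decay in place of real SUE parameters (both assumptions of this example, not NeuroLab's code):

```pascal
type
  TUnitState = (usEquilibrium, usActive);
  TUnit = record
    Charge: Double;
    State: TUnitState;
  end;

{ one SUE cycle on one SU: decay toward rest (illustrative constants) }
procedure Cycle(var U: TUnit);
begin
  U.Charge := U.Charge * 0.5;
  if U.Charge < 0.01 then
  begin
    U.Charge := 0.0;
    U.State := usEquilibrium;
  end;
end;

{ propagate until every SU has returned to the state of equilibrium }
procedure Propagate(var Units: array of TUnit);
var
  I: Integer;
  AllAtRest: Boolean;
begin
  repeat
    AllAtRest := True;
    for I := 0 to High(Units) do
    begin
      Cycle(Units[I]);
      if Units[I].State <> usEquilibrium then
        AllAtRest := False;
    end;
  until AllAtRest;   { propagation ends here }
end;
```

A soma would run the same loop over a matrix instead of an array, checking its frame elements for impulses reaching the exit points.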
Neuron: by integrating one soma, one axon and one TA, it becomes the level 2 entity of the AIS. In the current implementation, although the soma and the axon benefit from separate SUEs, both operate on a single set of parameters, again according to the "single cell" "theory". The neuron is able to get stimulated and to propagate the result of the stimulus (in the form of impulses) throughout the soma matrix and along the axon to the TA. The soma and the axon work as they would independently, except for the shared SUE operating parameter set. The neuron entity only ensures serialization of the process, as well as the connection between soma, axon and TA, i.e. an impulse reaching a soma exit point becomes a "stimulus" for the axon, and an impulse reaching the unidirectional axon's exit point will be sent to the TA and then further to other neurons.
Connection: a bridge between any two neurons or between a neuron and itself (self-connected neurons are a possible variant); a connection implies a terminal (which contains the connection information) belonging to the efferent neuron, a "stimulable" AIS entity integrated into a neuron (the afferent neuron) - soma or axon - and the topographic information regarding the afferent structure, e.g. "Neuron1", through "Terminal 215", connects with "Neuron3" on its soma, at coordinates 110,23 on the soma matrix. I did not find it opportune to emulate the neural dendrites in this relation.
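The "Neuron1 / Terminal 215 / Neuron3" example above maps naturally onto a record; the following is only one plausible encoding, with identifiers held as indices:

```pascal
type
  TTargetKind = (tkSoma, tkAxon);   { the "stimulable" afferent entity }

  TConnection = record
    EfferentNeuron: Integer;        { e.g. "Neuron1" }
    Terminal: Integer;              { e.g. "Terminal 215" }
    AfferentNeuron: Integer;        { e.g. "Neuron3" }
    Target: TTargetKind;            { here: the soma }
    X, Y: Integer;                  { e.g. (110, 23) on the soma matrix }
  end;
```

For a self-connected neuron, `EfferentNeuron` and `AfferentNeuron` would simply hold the same index.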
Artificial neural network (ANN): it represents a heterogeneous collection of neurons (which can differ from each other in every aspect, such as size, SUE parameters, number of afferent/efferent connections etc.), independent or interconnected in any way; this form of integration ensures the serialization of the propagation process (propagation in every single neuron is synchronized with all the others; no single neuron's functioning can hold up the rest of the ANN). The ANN also manages the connections between the neurons it contains; it would be able to create new connections (or clear old ones) as a form of adapting. In short, the ANN is the level 3 (topmost level) entity of the AIS, in complete control over all inferior level entities it contains, in a hierarchical fashion.
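The synchronization property described here - no neuron outrunning the others - amounts to stepping every neuron exactly once per pass. A skeleton of that idea, with `TNeuron` as a bare stub for the level 2 entity (names and structure are assumptions for illustration):

```pascal
type
  { stub for the level 2 entity }
  TNeuron = class
  public
    procedure Step; virtual; abstract;   { one propagation step }
  end;

  { level 3 (topmost) entity: owns neurons and their connections }
  TANN = class
  private
    FNeurons: array of TNeuron;
  public
    procedure StepAll;
  end;

procedure TANN.StepAll;
var
  I: Integer;
begin
  { every neuron advances exactly one step per pass, so propagation in
    each neuron stays synchronized with all the others }
  for I := 0 to High(FNeurons) do
    FNeurons[I].Step;
end;
```

Connection management (creating and clearing connections as a form of adapting) would live at this level too, since only the ANN knows all of its neurons.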
The practical way of implementing the above may look like an easy path to walk. It is not.
The main practical difficulties are generated by the need to attain not only a nature-like AIS, but an AIS model that can be experimented with in ways we do not (for the moment) fully understand in the living world, e.g. neural growth in the adult or neural multiplication - a model that would be subject to real-time (in-process) manipulation. Here are some of them:
- physical cellular growth and death require the neuron model to support resizing its data structures on-the-fly (during propagation), since cellular activity continues during these processes; this demands special programming care with memory allocation/de-allocation.
- the randomization issue: e.g. what is the order of neighboring SUs in the propagation of an impulse from an active SU? Is it always the same? Can nature be emulated by randomly choosing a different order every time? If some degree of randomization is used in such matters, will the results be reproducible? Is it worth the severe performance penalty of replacing randomization with complex probabilistic functions? Will the AIS be too "cold" if randomization and probability are not taken into account?
- large numbers, lots of data and user feedback: such an AIS would have to manage up to billions of neurons, the number being limited only by hardware restrictions (esp. in terms of memory and processor power), not by the software implementation; real-time user feedback, when the numbers grow large, would create a huge overhead that makes the price of real-time user interaction unacceptable
- serial vs. parallel processing: on uniprocessor systems, the closest one can get to parallel processing is multithreading - impossible when parallel processing on millions of threads is expected and required; will serialization create too much overhead?
- ease of maintenance of the implementation, in the case of such a malleable model, becomes mandatory
- etc. etc. etc.
Most of these difficulties have been overcome.
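On the reproducibility question raised above: a randomized neighbor order can still be reproducible if the pseudo-random generator is seeded with a known value before the run. A sketch using Delphi's standard `Random`/`RandSeed` (the procedure name is mine):

```pascal
{ shuffle the order in which an active SU's neighbors are visited
  (Fisher-Yates shuffle over an index array) }
procedure ShuffleOrder(var Order: array of Integer);
var
  I, J, Tmp: Integer;
begin
  for I := High(Order) downto 1 do
  begin
    J := Random(I + 1);
    Tmp := Order[I];
    Order[I] := Order[J];
    Order[J] := Tmp;
  end;
end;

{ setting RandSeed := 42; (any fixed value) before a run makes an
  entire randomized propagation sequence exactly reproducible }
```

This keeps the "warmth" of randomization while avoiding the performance penalty of full probabilistic functions, at the cost of the sequence depending on the generator's quality.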
The programming was performed in a high-level object-oriented programming (OOP) language, namely Object Pascal as implemented by Borland (TM) in their Delphi 6.0 IDE. As a side remark (not a Borland commercial), such a task requires a powerful tool like that, because to progress in this endeavor is to always see what you're doing, and the IDE almost takes the weight of interface building off the programmer's shoulders. Also, I cannot see how non-OOP, and especially non-structured, programming could accomplish this. The entire software implementation of the AIS model seen in NeuroLab is an object-oriented, event-based successful adventure. This ensures easy maintenance in every way, as well as decent performance, given the complexity of the problem. On the hardware and software platform described above, a neuron with a 200x200 soma, a 200-element axon array and facilitated, randomized propagation mode - a theoretical figure of 1,480,000 soma processing cycles for one stimulus - takes about 12 seconds to reach equilibrium (repose) after stimulation.
AIS scalability: the system can stop at the level of the soma, which, in terms of information balancing, can be viewed as an ANN of 1x1-size neurons with no axons; in this view an ANN would be a network of subnetworks. In any case, scalability is obvious in this implementation of ANNs, with any ANN being able to fit into another one as a subnet.
Some intrinsic limitations have been applied to this software implementation, such as limited soma and axon sizes (less important considering the scalability) and SUE parameter boundaries. These were all introduced initially as an attempt to reduce nagging large memory block allocations and long propagation processing times, and they can all easily be discarded with no consequence to the model, which can stand any values of these parameters (of course, the 64-bit number representation limit (available on x86 32-bit platforms) remains; but that means 9,223,372,036,854,775,807 (:-) ).