Higher-order Neural Network or Functional-link Network.

Either name is given to neural networks that expand the standard feedforward, back-propagation architecture to include nodes at the input layer which give the network a more complete representation of the input. The inputs are transformed in a well-understood mathematical way so that the network does not have to learn basic mathematical functions itself, and these transformations enhance the network's grasp of a given problem. The transformations apply higher-order functions to the inputs, such as squares, cubes, or sines. The two names for this same concept derive from the names of these functions: higher-order terms or functionally linked mappings.

This technique has been shown to dramatically improve the learning rates of some applications. An additional advantage of this extension of back-propagation is that the higher-order functions can be applied to other variants as well: delta bar delta, extended delta bar delta, or any other enhanced feedforward, back-propagation network.

There are two basic ways of adding the additional input nodes. First, the cross products of the input terms can be added to the model. This is also called the outer product or tensor model, in which each component of the input pattern multiplies the entire input pattern vector. A reasonable way to do this is to add all interaction terms between input values. For example, for a back-propagation network with three inputs (A, B and C), the cross products would include: AA, BB, CC, AB, AC, and BC. This example adds second-order terms to the input structure of the network; third-order terms, such as ABC, could also be added.
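As a minimal sketch, the second-order expansion just described might be implemented as follows. The function name tensor_expand and the use of NumPy are illustrative choices, not part of the original formulation.

```python
import numpy as np

def tensor_expand(x):
    """Append all unique second-order cross products to an input vector.

    For x = [A, B, C] this yields [A, B, C, AA, AB, AC, BB, BC, CC].
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    cross = [x[i] * x[j] for i in range(n) for j in range(i, n)]
    return np.concatenate([x, cross])

# Example with three inputs A=2, B=3, C=5:
print(tensor_expand([2.0, 3.0, 5.0]))
# -> [ 2.  3.  5.  4.  6. 10.  9. 15. 25.]
```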

The second method for adding additional input nodes is the functional expansion of the base inputs. Thus, a back-propagation model with inputs A, B and C might be transformed into a higher-order neural network model with inputs: A, B, C, SIN(A), COS(B), LOG(C), MAX(A,B,C), etc. In this model, input variables are individually acted upon by appropriate functions. Many different functions can be used. The overall effect is to provide the network with an enhanced representation of the input. It is even possible to combine the tensor and functional expansion models.
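The functional expansion admits an equally small sketch, using exactly the example transforms named above; the function name is ours, and the logarithm assumes C is positive.

```python
import numpy as np

def functional_expand(x):
    """Augment inputs [A, B, C] with fixed functional transforms."""
    a, b, c = x
    return np.array([a, b, c,
                     np.sin(a), np.cos(b), np.log(c),   # log assumes c > 0
                     max(a, b, c)])

print(functional_expand([0.5, 1.0, 2.0]))
# -> [0.5, 1.0, 2.0, sin(0.5), cos(1.0), log(2.0), 2.0]
```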

No new information is added, but the representation of the inputs is enhanced, and this higher-order representation can make the network easier to train: the joint or functional activations become directly available to the model, and in some cases a hidden layer is no longer needed. The limitation of the model is cost: many more input nodes must be processed to use the transformations of the original inputs, and with third- and higher-order terms the number of added nodes grows rapidly. Because processing time is finite, the inputs should not be expanded more than is needed to reach an accurate solution.
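A classic illustration of the hidden-layer point, not drawn from the text itself, is XOR: with the single cross-product term x1*x2 appended to the inputs, one sigmoid output unit with no hidden layer can learn the mapping. The sketch below assumes batch gradient descent with the delta rule; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs augmented with the second-order term x1*x2.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
X_aug = np.column_stack([X, X[:, 0] * X[:, 1]])   # columns: x1, x2, x1*x2
t = np.array([0.0, 1.0, 1.0, 0.0])                # XOR targets

w = rng.normal(scale=0.1, size=3)                 # no hidden layer: one unit
b, lr = 0.0, 1.0

for _ in range(5000):
    y = 1.0 / (1.0 + np.exp(-(X_aug @ w + b)))    # sigmoid output
    g = (y - t) * y * (1.0 - y)                   # delta-rule gradient
    w -= lr * X_aug.T @ g
    b -= lr * g.sum()

y = 1.0 / (1.0 + np.exp(-(X_aug @ w + b)))
print(np.round(y, 2))                             # approaches [0, 1, 1, 0]
```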

Functional-link networks were developed by Yoh-Han Pao and are documented in his book, Adaptive Pattern Recognition and Neural Networks. Pao draws a distinction between truly adding higher-order terms, in the sense that some of those terms represent joint activations, and functional expansion, which increases the dimension of the representation space without adding joint activations. While most developers recognize the difference, the two aspects are typically treated in the same way in practice. Pao has been awarded a patent for the functional-link network, so its commercial use may require royalty licensing.

Self-Organizing Map into Back-Propagation.

This hybrid network uses a self-organizing map to conceptually separate the data before it is processed in the normal back-propagation manner. The map captures the topology and hierarchical structure of high-dimensional input spaces before the data enters the feedforward, back-propagation network, so the change to the input is similar to having an automatic functional-link input structure. The self-organizing map trains in an unsupervised manner; the rest of the network then goes through its normal supervised training.
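A minimal sketch of such a hybrid follows. The one-dimensional map, the Gaussian unit responses used as features, and the toy two-cluster data are all illustrative assumptions rather than a prescribed design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: unsupervised self-organizing map (minimal 1-D version).
def train_som(data, n_units=8, epochs=200, lr0=0.5, sigma0=2.0):
    w = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                       # decaying rate
        sigma = max(sigma0 * (1 - e / epochs), 0.5)
        for x in data:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            d = np.arange(n_units) - bmu                  # grid distance to BMU
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))      # neighborhood weights
            w += lr * h[:, None] * (x - w)
    return w

# Stage 2: map responses become the inputs of a supervised stage,
# playing the role of an automatic functional-link expansion.
def som_features(w, x):
    return np.exp(-((w - x) ** 2).sum(axis=1))    # one soft activation per unit

# Toy data: two labeled clusters (hypothetical).
data = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
labels = np.array([0.0] * 20 + [1.0] * 20)

w_som = train_som(data)                           # unsupervised phase
F = np.array([som_features(w_som, x) for x in data])

v = rng.normal(scale=0.1, size=F.shape[1])        # supervised phase
b = 0.0
for _ in range(2000):
    y = 1.0 / (1.0 + np.exp(-(F @ v + b)))
    g = (y - labels) * y * (1.0 - y)
    v -= 0.5 * F.T @ g
    b -= 0.5 * g.sum()

print(f"training accuracy: {np.mean((y > 0.5) == labels):.2f}")
```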
