The use of artificial neural network (ANN) models has grown considerably over the last decade. One of the difficulties in applying ANNs is that, in most cases, a large number of candidate input variables is available. In the past, there was a tendency to use a large number of inputs in ANN applications. This can have several detrimental effects on the network during training, and it also requires a greater amount of data to estimate the connection weights efficiently. Additional inputs tend to increase the training time and the risk of the training algorithm becoming stuck in a local minimum. A large number of inputs also increases the risk of including spurious variables that merely add noise to the forecasts. Consequently, it is important to use an appropriate input variable selection technique in order to obtain the smallest set of independent inputs that are useful predictors for the system under study. The aim of this paper is to review techniques for selecting appropriate model inputs, based particularly on mutual information and genetic algorithms.
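The abstract does not give the authors' algorithms, but the mutual-information approach it mentions can be illustrated with a minimal sketch: estimate the mutual information between each candidate input and the target via histograms, then rank candidates so that only the most informative ones are retained as ANN inputs. All function names, the bin count, and the toy data below are illustrative assumptions, not the paper's method.

```python
import math
import random

def mutual_information(x, y, bins=8):
    """Estimate MI (in nats) between two samples using equal-width histograms.
    This is a simple plug-in estimator; bin count is an assumed tuning choice."""
    n = len(x)

    def bin_index(v, lo, hi):
        if hi == lo:
            return 0
        i = int((v - lo) / (hi - lo) * bins)
        return min(i, bins - 1)  # clamp the maximum value into the last bin

    lox, hix = min(x), max(x)
    loy, hiy = min(y), max(y)
    joint = {}            # counts for the joint (x-bin, y-bin) histogram
    px = [0] * bins       # marginal counts for x
    py = [0] * bins       # marginal counts for y
    for xi, yi in zip(x, y):
        bx, by = bin_index(xi, lox, hix), bin_index(yi, loy, hiy)
        joint[(bx, by)] = joint.get((bx, by), 0) + 1
        px[bx] += 1
        py[by] += 1

    # MI = sum over cells of p(x,y) * log( p(x,y) / (p(x) p(y)) )
    mi = 0.0
    for (bx, by), c in joint.items():
        pxy = c / n
        mi += pxy * math.log(pxy * n * n / (px[bx] * py[by]))
    return mi

def rank_inputs(candidates, target, bins=8):
    """Rank candidate input series by estimated MI with the target (descending)."""
    scores = {name: mutual_information(xs, target, bins) for name, xs in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy data: the target depends nonlinearly on x1, while x2 is pure noise,
# so an MI-based filter should place x1 ahead of x2.
random.seed(0)
x1 = [random.gauss(0, 1) for _ in range(2000)]
x2 = [random.gauss(0, 1) for _ in range(2000)]
y = [math.sin(v) + random.gauss(0, 0.1) for v in x1]
ranking = rank_inputs({"x1": x1, "x2": x2}, y)
print(ranking[0][0])  # the informative input should rank first
```

A genetic-algorithm wrapper, the other family of techniques the abstract names, would instead encode each candidate subset as a bit string and score it by trained-model error, which is more expensive but captures interactions between inputs that a one-variable-at-a-time MI filter can miss.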
Journal: Neural Computing & Applications
Publication status: Published - 1 Jan 2009