Tuesday, October 23, 2012

Training Feedforward Neural Networks Using Genetic Algorithms

Multilayered feedforward neural networks possess a number of properties which make them particularly suited to complex pattern classification problems. However, their application to some real-world problems has been hampered by the lack of a training algorithm which reliably finds a nearly globally optimal set of weights in a relatively short time. Genetic algorithms are a class of optimization procedures which are good at exploring a large and complex space in an intelligent way to find values close to the global optimum. Hence, they are well suited to the problem of training feedforward networks. In this paper, we describe a set of experiments performed on data from a sonar image classification problem. These experiments both 1) illustrate the improvements gained by using a genetic algorithm rather than backpropagation and 2) chronicle the evolution of the performance of the genetic algorithm as we added more and more domain-specific knowledge into it.

1 Introduction

Neural networks and genetic algorithms are two techniques for optimization and learning, each with its own strengths and weaknesses. The two have generally evolved along separate paths. However, recently there have been attempts to combine the two technologies. Davis (1988) showed how any neural network can be rewritten as a type of genetic algorithm called a classifier system, and vice versa. Whitley (1988) attempted unsuccessfully to train feedforward neural networks using genetic algorithms. In this paper we describe a different genetic algorithm for training feedforward networks. It not only succeeds in its task, but it outperforms backpropagation, the standard training algorithm, on a difficult example. This success comes from tailoring the genetic algorithm to the domain of training neural networks. We document the evolution and ultimate success of this algorithm with a series of experiments.

The paper is structured as follows. Sections 2 and 3 give an overview of neural networks and genetic algorithms respectively, with a special emphasis on their strengths and weaknesses. Section 4 describes the data on which the experiments were run. Section 5 details the genetic algorithm we used to perform neural network weight optimization. Section 6 describes the experiments we ran and analyzes their results. Section 7 provides conclusions about our work and suggestions for future work.

2 Neural Networks

Neural networks are algorithms for optimization and learning based loosely on concepts inspired by research into the nature of the brain. They generally consist of five components:

1. A directed graph, known as the network topology, whose arcs we refer to as links.
2. A state variable associated with each node.
3. A real-valued weight associated with each link.
4. A real-valued bias associated with each node.
5. A transfer function for each node which determines the state of a node as a function of a) its bias b, b) the weights w_i of its incoming links, and c) the states x_i of the nodes connected to it by these links. This transfer function usually...
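The transfer function referred to above is typically a sigmoid applied to the node's bias plus the weighted sum of its incoming states. The sketch below is not the authors' code; it is a minimal Python illustration, under stated assumptions, of the combination the paper studies: a small fully connected feedforward network with sigmoid transfer functions whose flat weight vector is evolved by a simple genetic algorithm (fitness = negative sum-of-squares error) instead of being trained by backpropagation. The 2-4-1 topology, the XOR stand-in data, and all GA parameters (population size, mutation rate, uniform crossover) are assumptions chosen for brevity, not details taken from the paper.

```python
# Minimal sketch: a 2-4-1 sigmoid feedforward network whose weights are
# optimized by a generic genetic algorithm.  All sizes and GA settings are
# illustrative assumptions, not the paper's actual configuration.
import math
import random

def sigmoid(z):
    # Clamp to avoid overflow in exp for extreme weight values.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, z))))

class FeedforwardNet:
    """2-4-1 fully connected network with sigmoid transfer functions."""
    def __init__(self, weights):
        # Flat weight vector: for each of 4 hidden nodes (w1, w2, bias),
        # then 4 output weights and 1 output bias = 17 real values.
        self.w = weights

    @staticmethod
    def n_weights():
        return 4 * 3 + 4 + 1

    def forward(self, x):
        w, hidden, idx = self.w, [], 0
        for _ in range(4):
            s = w[idx] * x[0] + w[idx + 1] * x[1] + w[idx + 2]  # weighted sum + bias
            hidden.append(sigmoid(s))
            idx += 3
        s = sum(w[idx + i] * hidden[i] for i in range(4)) + w[idx + 4]
        return sigmoid(s)

# Toy training data (XOR), standing in for the sonar image features.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(weights):
    """Negative sum-of-squares error over the data set; higher is better."""
    net = FeedforwardNet(weights)
    return -sum((net.forward(x) - y) ** 2 for x, y in DATA)

def mutate(weights, rate=0.1, scale=1.0):
    # Add Gaussian noise to a random subset of the weights.
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in weights]

def crossover(a, b):
    # Uniform crossover: each weight is copied from one parent at random.
    return [random.choice(pair) for pair in zip(a, b)]

def train(pop_size=50, generations=200):
    n = FeedforwardNet.n_weights()
    pop = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = train()
    net = FeedforwardNet(best)
    for x, y in DATA:
        print(x, y, round(net.forward(x), 3))
```

The uniform crossover and Gaussian mutation above are generic placeholders; the paper's contribution is precisely the domain-specific versions of these operators, which it describes in its Section 5.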

Website: ijcai.org | Filesize: -
No of Page(s): 6
Download Training Feedforward Neural Networks Using Genetic Algorithms.pdf
