Tuesday, September 25, 2012

A Parallel Particle Swarm Optimization Algorithm With Communication Strategies

A Parallel Particle Swarm Optimization Algorithm With Communication Strategies

Shu-Chuan Chu (1), John F. Roddick (1), Jeng-Shyang Pan (2)
(1) School of Informatics and Engineering, Flinders University of South Australia, PO Box 2100, Adelaide 5001, South Australia. Email: {shuchuan.chu, roddick}@infoeng.flinders.edu.au
(2) Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, 415 Chien-Kung Rd, Kaohsiung, Taiwan. Email: jspan@cc.kuas.edu.tw

Abstract

A parallel version of the particle swarm optimization algorithm (PPSO) is presented together with three communication strategies which can be used according to the independence of the data. The first strategy is designed for parameters of solutions that are independent or only loosely correlated, such as the Rosenbrock and Rastrigin functions. The second communication strategy can be applied to parameters that are more strongly correlated, such as the Griewank function. In cases where the properties of the parameters are unknown, a third, hybrid communication strategy can be used. Experimental results demonstrate the usefulness of the proposed PPSO algorithm.

Keywords: parallel particle swarm optimization, PPSO, communication strategies, Rosenbrock, Rastrigin and Griewank functions.

1 Introduction

The particle swarm optimization (PSO) algorithm is based on the evolutionary computation technique (Eberhart & Kennedy 1995, Kennedy & Eberhart 1995). PSO is a population-based evolutionary algorithm and has similarities to the general evolutionary algorithm. However, PSO is motivated by the simulation of social behaviour, which differs from the natural selection scheme of genetic algorithms (Goldberg 1989, Davis 1991, Gen & Cheng 1997). The metaphor is that of multiple collections (a swarm) of objects moving in space; the objects are therefore said to possess position and velocity and are influenced by the others in the swarm.

PSO processes the search scheme using populations of particles, which correspond to the individuals of genetic algorithms. Each particle is equivalent to a candidate solution of a problem. A particle moves according to an adjusted velocity that is based on the particle's own experience and the experience of its companions. For the D-dimensional function $f(\cdot)$, the $i$-th particle at the $t$-th iteration can be represented as $X_i^t = (x_i^t(1), x_i^t(2), \ldots, x_i^t(D))$. Assume that the best previous position of the $i$-th particle at the $t$-th iteration is represented as $P_i^t = (p_i^t(1), p_i^t(2), \ldots, p_i^t(D))$; then $f(P_i^t) \le f(P_i^{t-1}) \le \ldots \le f(P_i^1)$. The velocity of the $i$-th particle at the $t$-th iteration can be represented as $V_i^t = (v_i^t(1), v_i^t(2), \ldots, v_i^t(D))$. $G^t = (g^t(1), g^t(2), \ldots, g^t(D))$ is defined as the best position amongst all particles from the first iteration to the $t$-th iteration, where "best" is defined by some function of the swarm.

The original particle swarm optimization algorithm can be expressed as follows:

$V_i^{t+1} = V_i^t + C_1 \cdot r_1 \cdot (P_i^t - X_i^t) + C_2 \cdot r_2 \cdot (G^t - X_i^t)$   (1)

$X_i^{t+1} = X_i^t + V_i^{t+1}, \quad i = 0, \ldots, N-1$   (2)

where $N$ is the particle size and $-V_{max} \le V_i^{t+1} \le V_{max}$ ($V_{max}$ is ...
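To make the update rules concrete, the following is a minimal NumPy sketch of the sequential PSO loop defined by equations (1) and (2). The Rastrigin objective, the parameter values (C1, C2, Vmax, swarm size, iteration count) and the search bounds are illustrative assumptions rather than values taken from the paper, and the parallel swarms and communication strategies described in the abstract are not included.

```python
import numpy as np

def rastrigin(x):
    """Standard Rastrigin benchmark (global minimum 0 at the origin)."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def pso(f, dim=10, n_particles=20, iters=1000, c1=2.0, c2=2.0, v_max=4.0, seed=0):
    # Illustrative parameter values; not taken from the paper.
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.12, 5.12, (n_particles, dim))     # positions X_i
    V = rng.uniform(-v_max, v_max, (n_particles, dim))   # velocities V_i
    P = X.copy()                                          # personal bests P_i
    p_val = np.array([f(x) for x in X])
    g = P[p_val.argmin()].copy()                          # global best G

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Equation (1): velocity update
        V = V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        V = np.clip(V, -v_max, v_max)                     # -Vmax <= V <= Vmax
        # Equation (2): position update
        X = X + V
        # Update personal and global bests
        vals = np.array([f(x) for x in X])
        improved = vals < p_val
        P[improved] = X[improved]
        p_val[improved] = vals[improved]
        g = P[p_val.argmin()].copy()
    return g, p_val.min()

best_x, best_f = pso(rastrigin)
print(best_f)
```

A parallel version along the lines of the paper would run several such swarms and periodically exchange best positions between them according to the chosen communication strategy.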

Website: iiirc.hitsz.edu.cn | Filesize: -
No of Page(s): 4
Download A Parallel Particle Swarm Optimization Algorithm With Communication.pdf
