Thursday, September 27, 2012

Particle Swarm Optimization Algorithm For Single Machine Total Weighted Tardiness Problem

Particle Swarm Optimization Algorithm for Single Machine Total Weighted Tardiness Problem. M. Fatih Tasgetiren (Dept. of Management, Fatih University, 34500 Buyukcekmece, Istanbul, Turkey; Email: ftasgetiren@fatih.edu.tr), Mehmet Sevkli (Dept. of Industrial Engineering, Fatih University, 34500 Buyukcekmece, Istanbul, Turkey; Email: msevkli@fatih.edu.tr), Yun-Chia Liang (Dept. of Industrial Engineering and Management, Yuan Ze University, No 135 Yuan-Tung Road, Chung-Li, Taoyuan County, Taiwan 320, R.O.C.; Email: ycliang@saturn.yzu.edu.tw), Gunes Gencyilmaz (Dept. of Management, Istanbul Kultur University, E5 Karayolu Uzeri, Sirinevler, Istanbul, Turkey; Email: g.gencyilmaz@iku.edu.tr).

Abstract: In this paper we present a particle swarm optimization algorithm to solve the single machine total weighted tardiness problem. A heuristic rule, the Smallest Position Value (SPV) rule, is developed to enable the continuous particle swarm optimization algorithm to be applied to all classes of sequencing problems, which are NP-hard in the literature. A simple but very efficient local search method is embedded in the particle swarm optimization algorithm. The computational results show that the particle swarm algorithm is able to find the optimal and best-known solutions on all instances of widely used benchmarks from the OR library.

I. INTRODUCTION. Particle Swarm Optimization (PSO) is one of the latest population-based optimization methods, which does not use the filtering operation (such as crossover and/or mutation), and the members of the entire population are maintained through the search procedure. In a PSO algorithm, each member is called a "particle", and each particle flies around in the multi-dimensional search space with a velocity, which is constantly updated by the particle's own experience, the experience of the particle's neighbors, or the experience of the whole swarm. Two variants of the PSO algorithm are developed, namely PSO with a local neighborhood, and PSO with a ... towards the best particle in its restricted neighborhood [1]. Since PSO was first introduced by Kennedy and Eberhart [2, 3], it has been successfully applied to optimize various continuous nonlinear functions. Although the applications of PSO to combinatorial optimization problems are still limited, PSO has certain advantages such as being easy to implement and computationally efficient. Therefore, this paper is the first to employ PSO to solve the single machine total weighted tardiness (SMTWT) problem, which is a typical combinatorial optimization problem. McNaughton [4] first presented a scheduling problem in which the objective is to minimize total penalty cost. He proved that an optimal solution exists in which no job is split, so that only permutation schedules of the n jobs need to be considered. Therefore, the SMTWT problem can be stated as follows. Each of n jobs (j = 1, ..., n) is to be processed without preemption on a single machine that can handle no more than one job at a time. The processing and set-up requirements of any job are independent of its position in the sequence. The release time of all jobs is zero. Thus, job j (j = 1, ..., n) becomes available for processing at...

Website: www.fatih.edu.tr | Filesize: -
No of Page(s): 8
Download Particle Swarm Optimization Algorithm For Single Machine Total ....pdf
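
The excerpt above describes how the Smallest Position Value (SPV) rule turns a particle's continuous position vector into a job permutation, which is then scored by total weighted tardiness. A minimal sketch of those two pieces, assuming a toy job instance (the data and function names below are illustrative, not the authors' code):

```python
import random

def spv_permutation(position):
    """SPV rule: sort dimension indices by ascending position value
    to obtain a job permutation, as described in the excerpt."""
    return sorted(range(len(position)), key=lambda j: position[j])

def total_weighted_tardiness(permutation, proc, due, weight):
    """SMTWT objective: sum of w_j * max(0, C_j - d_j) over the sequence."""
    t, cost = 0, 0
    for j in permutation:
        t += proc[j]                      # completion time C_j on the single machine
        cost += weight[j] * max(0, t - due[j])
    return cost

# Illustrative 4-job instance (hypothetical data).
proc   = [3, 5, 2, 4]
due    = [4, 6, 3, 10]
weight = [2, 1, 3, 1]
position = [random.uniform(0, 4) for _ in range(4)]   # a continuous PSO particle
perm = spv_permutation(position)
print(perm, total_weighted_tardiness(perm, proc, due, weight))
```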

A novel genetic algorithm based on immunity

A novel algorithm, the immune genetic algorithm (IGA), is proposed based on the theory of immunity in biology, which mainly constructs an immune operator accomplished in two steps: 1) vaccination and 2) immune selection. IGA is proved to be theoretically convergent with probability 1. Strategies and methods for selecting vaccines and constructing an immune operator are also given. Using examples of TSP, IGA is shown to restrain the degenerative phenomenon effectively during the evolutionary process, improve the searching ability and adaptability, and greatly increase

the converging speed. Index Terms—Convergence, genetic algorithm, immune genetic algorithm, immunity, TSP. I. INTRODUCTION. During the last three decades, there has been a growing interest in algorithms which rely on analogies to natural phenomena such as evolution, heredity, and immunity. The emergence of massively parallel computers made these algorithms of practical interest. The genetic algorithm (GA) belongs to one category of these best known algorithms, whose beginnings can be traced back to the early 1950s when several biologists used computers for simulations of biological systems [1]. However, the work done in the late 1960s and the early 1970s at the University of Michigan under the direction of J. Holland led to GA as it is known today [2]. With the characteristics of easier application, greater robustness, and better parallel processing than most classical methods of optimization, GA has been widely used for combinatorial optimization [3], [4], structural designing [5], machine learning rule-based classifier systems [6], [7], and other engineering problems [8]–[10]. It is well known that GA pertains to searching algorithms with an iteration of generation-and-test. Two operators—crossover and mutation—give each individual the chance of optimization and ensure the evolutionary tendency with the selection mechanism of survival of the fittest. GA also proves to be convergent under the condition of maintaining the best individual found over time after selection [11]. Because the two genetic operators make individuals change randomly and indirectly during the whole process, they not only give the individuals the evolutionary chance but also cause certain degeneracy. In some cases, these degenerative phenomena are very obvious. On the other hand, there are many basic and obvious characteristics or knowledge in a pending problem. However, the crossover and mutation operators in GA lack the capability of meeting an actual situation, so that some torpidity appears when solving problems, which is conducive to the universality of the algorithm but neglects the assistant function of the characteristics or knowledge. The loss due to the negligence is sometimes considerable in dealing with some complex problems. It is also realized from...

Website: bit.csc.lsu.edu | Filesize: -
No of Page(s): 10
Download A novel genetic algorithm based on immunity ... - Computer Science.pdf
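
The abstract describes an immune operator made of two steps, vaccination and immune selection. The sketch below shows one way such an operator could sit inside a GA generation; the vaccine encoding, acceptance rule, and probabilities are assumptions for illustration, not the paper's exact formulation:

```python
import random

def vaccinate(individual, vaccine, p_inject=0.5):
    """Vaccination: overwrite some genes with prior-knowledge values (the 'vaccine')."""
    child = list(individual)
    for i, gene in vaccine.items():        # vaccine: {gene index -> preferred value}
        if random.random() < p_inject:
            child[i] = gene
    return child

def immune_operator(individual, vaccine, fitness):
    """Vaccination followed by immune selection: keep the vaccinated
    individual only if it is at least as fit as the original."""
    candidate = vaccinate(individual, vaccine)
    return candidate if fitness(candidate) >= fitness(individual) else individual

# Toy usage: maximize the number of ones; the 'vaccine' says positions 0-2 should be 1.
fitness = lambda ind: sum(ind)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
vaccine = {0: 1, 1: 1, 2: 1}
pop = [immune_operator(ind, vaccine, fitness) for ind in pop]
print(pop)
```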

Wednesday, September 26, 2012

A multi-dimensional approach to force-directed layouts of large graphs

A multi-dimensional approach to force-directed layouts of large graphs. Computational Geometry 29 (2004) 3–18. www.elsevier.com/locate/comgeo. Pawel Gajer (Department of Computer Science, Johns Hopkins University, USA), Michael T. Goodrich (Department of Information and Computer Science, University of California, Irvine, CA, USA), Stephen G. Kobourov (Department of Computer Science, University of Arizona, USA). Available online 18 May 2004. Communicated by I. Streinu. Abstract: We present a novel hierarchical force-directed method for drawing large

graphs. Given a graph G = (V, E), the algorithm produces an embedding for G in a Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Such projections typically result in drawings that are "smoother" and more symmetric than direct drawings in 2D and 3D. In order to obtain fast placement of the vertices of the graph, our algorithm employs a multi-scale technique based on a maximal independent set filtration of vertices of the graph. While most existing force-directed algorithms begin with an initial random placement of all the vertices, our algorithm attempts to place vertices "intelligently", close to their final positions. Other notable features of our approach include a fast energy function minimization strategy and efficient memory management. Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a 550 MHz Pentium PC. © 2004 Elsevier B.V. All rights reserved. Keywords: Large graph drawing; Multi-scale method; High-dimensional embedding; Force-directed method. 1. Introduction. Graphs are common in many applications, from compilers to networks, from software engineering to databases. Typically, small graphs are drawn manually so that the resulting picture best shows the underlying relationships. The task of drawing graphs by hand becomes more challenging as the complexity and size of the graphs increase. Graph drawing tools have been the focus of the graph drawing community for at least the last two decades; see [11,30] for comprehensive reviews of the graph drawing field and [44] for work in information visualization. Numerous algorithms have been developed for drawing special classes of graphs such as trees and planar graphs. There are few general purpose graph drawing algorithms, however. Force-directed methods are often the methods of choice for drawing general graphs. Substantial interest in force-directed methods stems from their conceptual simplicity, applicability to general graphs, and typically aesthetically pleasing results. Automated...

Website: www.ics.uci.edu | Filesize: -
No of Page(s): 16
Download A multi-dimensional approach to force-directed layouts of large graphs.pdf
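
The core idea in this paper is to compute a force-directed embedding in a higher-dimensional Euclidean space and project it down to 2D or 3D. Below is a minimal sketch of the projection step using principal components; the high-dimensional layout itself is stubbed with random coordinates, so this is not the paper's multi-scale placement:

```python
import numpy as np

def project_embedding(coords, target_dim=2):
    """Project an n x d high-dimensional embedding onto its top
    principal components to obtain a 2D or 3D drawing."""
    centered = coords - coords.mean(axis=0)
    # SVD of the centered coordinates; right singular vectors give the projection axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:target_dim].T

# Stand-in for a force-directed layout of 100 vertices in 6 dimensions.
high_dim_layout = np.random.rand(100, 6)
drawing_2d = project_embedding(high_dim_layout, target_dim=2)
print(drawing_2d.shape)   # (100, 2)
```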

A fast layout algorithm for protein interaction networks

BIOINFORMATICS Vol. 19 no. 15 2003, pages 1882–1888. DOI: 10.1093/bioinformatics/btg346. A fast layout algorithm for protein interaction networks. Kyungsook Han and Byong-Hyon Ju, School of Computer Science & Engineering, Inha University, Inchon 402-751, Korea. Received on May 1, 2003; revised and accepted on July 15, 2003. ABSTRACT. Motivation: Graph drawing algorithms are often used for visualizing relational information, but a naive implementation of a graph drawing algorithm encounters real difficulties when drawing large-scale graphs such as protein interaction

networks. Results: We have developed a new, extremely fast layout algorithm for visualizing large-scale protein interaction networks in three-dimensional space. The algorithm (1) first finds a layout of connected components of an entire network, (2) finds a global layout of nodes with respect to pivot nodes within a connected component and (3) refines the local layout of each connected component by first relocating midnodes with respect to their cutvertices and direct neighbors of the cutvertices and then by relocating all nodes with respect to their neighbors within distance 2. Advantages of this algorithm over classical graph drawing methods include: (1) it is an order of magnitude faster, (2) it can directly visualize data from protein interaction databases and (3) it provides several abstraction and comparison operations for effectively analyzing large-scale protein interaction networks. Availability: http://wilab.inha.ac.kr/interviewer/ Contact: khan@inha.ac.kr. INTRODUCTION. While traditional biochemical experiments had generated a small set of data for individual protein–protein interactions, the last three years have seen a rapid expansion of protein interaction data due to the recent development of high-throughput interaction detection methods such as yeast two-hybrid (Ito et al., 2000) and mass spectrometry techniques. The interaction data is available either in text files or in databases. However, due to the volume of data, a graphical representation of protein interactions has proven to be much easier to understand than a long list of interacting proteins. Furthermore, a network of protein interactions provides us with a clear notion of protein function by showing a context within which function can be interpreted. Protein–protein interactions are typically visualized as an undirected graph G = (V, E), where x, y ∈ V represent proteins and (x, y) ∈ E represents an interaction between proteins x and y. Visualization of a graph is straightforward when dealing with a small number of nodes and edges. In practice, protein–protein interaction networks often consist of thousands of nodes or more, which severely limits the usefulness of many graph drawing tools, either because they produce cluttered drawings with many edge crossings or static drawings that are not easy to modify, because they are too slow for interactive analysis with large data sets, or because they require input data to be in a specific format rather than taking the data directly from protein–protein interaction databases. The ultimate usefulness of a protein interaction...

Website: bioinformatics.oxfordjournals.org | Filesize: -
No of Page(s): 7
Download A fast layout algorithm for protein interaction networks.pdf
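
The abstract lists three stages: lay out connected components, place nodes globally with respect to pivot nodes within each component, then refine locally. A rough skeleton of the first two stages on an adjacency-list graph; the pivot choice and coordinate rule are simplified placeholders rather than the published algorithm, and the refinement stage is omitted:

```python
from collections import deque
import random

def connected_components(adj):
    """Stage 1: split the interaction graph into connected components (BFS)."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

def bfs_distances(adj, source):
    """Hop distances from a source node to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def pivot_layout(adj, comp, num_pivots=2):
    """Stage 2 (simplified): use graph distances to a few pivot nodes
    as provisional coordinates for every node in the component."""
    pivots = random.sample(comp, min(num_pivots, len(comp)))
    dists = [bfs_distances(adj, p) for p in pivots]
    return {u: tuple(d.get(u, 0) for d in dists) for u in comp}

# Toy interaction network with two components.
adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"], "x": ["y"], "y": ["x"]}
for comp in connected_components(adj):
    print(pivot_layout(adj, comp))
```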

A Multilevel Algorithm for Force-Directed Graph-Drawing

Journal of Graph Algorithms and Applications, http://jgaa.info/, vol. 7, no. 3, pp. 253–285 (2003). A Multilevel Algorithm for Force-Directed Graph-Drawing. Chris Walshaw, School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Greenwich, London, SE10 9LS, UK. http://www.gre.ac.uk/~c.walshaw, C.Walshaw@gre.ac.uk. Abstract: We describe a heuristic method for drawing graphs which uses a multilevel framework combined with a force-directed placement algorithm. The multilevel technique matches and coalesces pairs of adjacent vertices to define a new graph and is

repeated recursively to create a hierarchy of increasingly coarse graphs, G_0, G_1, ..., G_L. The coarsest graph, G_L, is then given an initial layout and the layout is refined and extended to all the graphs starting with the coarsest and ending with the original. At each successive change of level, l, the initial layout for G_l is taken from its coarser and smaller child graph, G_{l+1}, and refined using force-directed placement. In this way the multilevel framework both accelerates and appears to give a more global quality to the drawing. The algorithm can compute both 2 & 3 dimensional layouts and we demonstrate it on examples ranging in size from 10 to 225,000 vertices. It is also very fast and can compute a 2D layout of a sparse graph in around 12 seconds for a 10,000 vertex graph to around 5-7 minutes for the largest graphs. This is an order of magnitude faster than recent implementations of force-directed placement algorithms. Keywords: graph-drawing, multilevel optimisation, force-directed placement. Communicated by Michael Kaufmann: submitted March 2002; revised August 2003. 1 Introduction. Graph-drawing algorithms form a basic enabling technology which can be used to help with the understanding of large sets of inter-related data. By presenting data in a visual form it can often be more easily digested by the user and both regular patterns and anomalies can be identified. However most data sets do not contain any explicit information on how they should be laid out for easy viewing, although normally such a layout will depend on the relationships between pieces of data. Thus if we model the data points with the vertices of a graph and the relationships with the edges we can use graph-based technology and, in particular, graph-drawing algorithms to infer a 'good' layout from an arbitrary data set based on the relationships. There has been considerable research into graph-drawing in recent years and a comprehensive survey can be found in [2]. Many such algorithms are based on physical models and the vertices are placed so as to minimise the 'energy' in the physical system (see below, §2.3). Typically such algorithms are able to display structures and symmetries in the graph but their computational cost in terms of CPU time is very high. 1.1 Motivation. The motivation behind our approach to graph-drawing...

Website: www.emis.de | Filesize: -
No of Page(s): 33
Download A Multilevel Algorithm for Force-Directed Graph-Drawing.pdf
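
The multilevel step matches and coalesces pairs of adjacent vertices to build a hierarchy of coarser graphs G_0, G_1, ..., G_L. Below is a small sketch of one coarsening pass using random edge matching, a simplification meant only to illustrate the idea, not the paper's matching strategy:

```python
import random

def coarsen_once(adj):
    """One multilevel coarsening pass: match adjacent vertex pairs and
    coalesce each matched pair into a single coarse vertex."""
    order = list(adj)
    random.shuffle(order)
    matched, parent = set(), {}
    next_id = 0
    for u in order:
        if u in matched:
            continue
        free = [v for v in adj[u] if v not in matched]
        v = random.choice(free) if free else None
        for w in (u, v):
            if w is not None:
                matched.add(w)
                parent[w] = next_id
        next_id += 1
    # Build the coarse graph: two groups are adjacent if any fine edge crosses them.
    coarse = {i: set() for i in range(next_id)}
    for u, nbrs in adj.items():
        for v in nbrs:
            if parent[u] != parent[v]:
                coarse[parent[u]].add(parent[v])
    return coarse, parent

# Toy graph; applying coarsen_once repeatedly would yield G_1, G_2, ...
adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(coarsen_once(adj))
```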

Genetic algorithm learning and the cobweb model

Journal of Economic Dynamics and Control 18 (1994) 3-28. North-Holland. Genetic algorithm learning and the cobweb model. Jasmina Arifovic, McGill University, Montréal, Qué. H3A 2T7, Canada. Received March 1992, final version received December 1992. This paper presents the cobweb model in which competitive firms, in a market for a single good, use a genetic algorithm to update their decision rules about next-period production and sales. The results of simulations show that the genetic algorithm converges to the rational expectations equilibrium

for a wider range of parameter values than other algorithms frequently studied within the context of the cobweb model. Price and quantity patterns generated by the genetic algorithm are also compared to the data of experimental cobweb economies. It is shown that the algorithm can capture several features of the experimental behavior of human subjects better than three other learning algorithms that are considered. 1. Introduction. Departure from the hypothesis that economic agents form rational expectations implies that a specific learning algorithm has to be employed in order to describe the way in which agents make decisions about their economic behavior. On the other hand, application of a particular algorithm faces a criticism of the arbitrariness of choice. Lucas (1986) suggests that comparison of the behavior of learning algorithms with the behavior exhibited in experimental economies with human subjects may be a possible way to address this problem. Thus, if learning algorithms, when applied to the same economic environment, result in different behavior, observations from laboratory experiments with human subjects may be used to determine which algorithm is more successful in describing actual human behavior. In this paper, a genetic algorithm (GA), developed by Holland (1970a), is used to model learning of economic agents in the cobweb model. The results obtained with the application of the GA are compared to the behavior observed in cobweb experiments with human subjects and to the results obtained in studies of other learning algorithms within the context of the same model. The objective is to examine if the GA can account for some of the results of the experimental economies which differ from the predictions of other adaptive schemes. The cobweb model is a model of a market for a single good in which firms that are price takers make their production decision in every time period before they observe a market price. Total quantity supplied and the exogenously given demand...

Website: web.uvic.ca | Filesize: -
No of Page(s): 26
Download Genetic algorithm learning and the cobweb model*.pdf
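
In the model described above, each firm encodes its next-period production quantity as a chromosome and fitness is the profit realized at the market-clearing price. A toy sketch of one GA generation under an assumed linear demand curve and quadratic cost; all parameter values and the encoding are illustrative, not the paper's calibration:

```python
import random

BITS, Q_MAX = 8, 10.0          # chromosome length and maximum quantity (assumptions)
A, B, COST = 20.0, 1.0, 0.5    # linear demand P = A - B*Q_total, quadratic cost c*q^2

def decode(chrom):
    """Map a bit string to a production quantity in [0, Q_MAX]."""
    return Q_MAX * int("".join(map(str, chrom)), 2) / (2**BITS - 1)

def one_generation(pop):
    """Evaluate profits at the market-clearing price, then apply
    fitness-proportional selection, one-point crossover, and bit mutation."""
    quantities = [decode(c) for c in pop]
    price = max(0.0, A - B * sum(quantities))      # price is revealed after decisions
    profits = [price * q - COST * q * q for q in quantities]
    weights = [p - min(profits) + 1e-9 for p in profits]
    def pick():
        return random.choices(pop, weights=weights)[0]
    new_pop = []
    for _ in pop:
        p1, p2 = pick(), pick()
        cut = random.randrange(1, BITS)
        child = p1[:cut] + p2[cut:]
        child = [b ^ 1 if random.random() < 0.02 else b for b in child]
        new_pop.append(child)
    return new_pop

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(20)]
for _ in range(50):
    pop = one_generation(pop)
print(sum(decode(c) for c in pop) / len(pop))   # average firm quantity after learning
```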

A Novel Particle Swarm Optimization Algorithm

A Novel Particle Swarm Optimization Algorithm. Shahriar Asta and A. Şima Uyar, Computer & Informatics Faculty, Istanbul Technical University, Istanbul, Turkey. {asta, etaner}@itu.edu.tr. Abstract: In this study a novel memory-based particle swarm optimization algorithm is presented. This algorithm utilizes external memory. A set of globally found best and worst positions, along with their parameters, are stored in two separate external memories. At each iteration, a coefficient, based on the distance of the current particle to the closest best

and closest worst particles, is calculated. When updating the velocity component, this coefficient is added to the current velocity of the particle with a certain probability. Also, randomized upper and lower bound values have been defined for the inertia component. The algorithm is tested on benchmark functions and it is shown empirically that it converges faster to the optima. It also outperforms the standard PSO and a recent improved PSO, as well as maintaining superior precision in comparison. Convergence speed is particularly important since the method will be used in a realistic robot motion simulator environment in which the simulation time is long enough to make convergence speed a primary concern. Keywords: Particle Swarm Optimization, External Memory. 1 Introduction. Particle Swarm Optimization (PSO) is a nature-inspired meta-heuristic method. This method was first introduced by Kennedy and Eberhart in 1995 [1]. It is inspired by the swarm behavior of flocking birds, and utilizes this behavior to guide the particles to search for globally optimal solutions. Basically, in PSO, a population of particles is spread randomly throughout the search space. The particles are assumed to be flying in the search space. The velocity and position of each particle is updated iteratively based on personal and social experiences. Each particle possesses a local memory in which the best experience achieved so far is stored. Also, a global memory keeps the best solution found so far. The sizes of both memories are restricted to one. The local memory represents the personal experience of the particle and the global memory represents the social experience of the swarm. The balance between the effect of the personal and social experiences is maintained using randomized correction coefficients. The philosophy behind the velocity update procedure is to reduce the distance between the particle and the best personal and social known locations. PSO is very easy to implement and there have been many successful implementations in several real-world applications. PSO is a population-based heuristic approach. It can get stuck in local optima when dealing with complex multimodal functions. This is why accelerating the convergence speed as well as avoiding local optima are two primary goals in PSO research. Multiple methods and approaches have been suggested to improve the performance of the original PSO in terms of these goals. In [2], these efforts have been divided into four categories. The first category includes the parameter selection methods....

Website: web.itu.edu.tr | Filesize: -
No of Page(s): 12
Download A Novel Particle Swarm Optimization Algorithm.pdf
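
The abstract describes two external memories of globally found best and worst positions and a distance-based coefficient that is occasionally added into the velocity update. A condensed sketch of that modified update for a single particle; the memory sizes, probability, and exact form of the extra term are assumptions for illustration, not the authors' formulation:

```python
import math
import random

def closest(memory, x):
    """Return the stored position nearest to x (Euclidean distance)."""
    return min(memory, key=lambda m: math.dist(m, x))

def update_velocity(v, x, pbest, gbest, best_mem, worst_mem,
                    w=0.7, c1=1.5, c2=1.5, p_mem=0.3):
    """Standard PSO velocity update plus, with probability p_mem, a term that
    attracts toward the closest remembered best and repels from the closest worst."""
    b, wst = closest(best_mem, x), closest(worst_mem, x)
    use_mem = random.random() < p_mem
    new_v = []
    for d in range(len(x)):
        vel = (w * v[d]
               + c1 * random.random() * (pbest[d] - x[d])
               + c2 * random.random() * (gbest[d] - x[d]))
        if use_mem:
            vel += random.random() * ((b[d] - x[d]) - (wst[d] - x[d]))
        new_v.append(vel)
    return new_v

# Toy usage in 2 dimensions with tiny hypothetical memories.
x, v = [0.2, -0.4], [0.0, 0.0]
pbest, gbest = [0.1, -0.3], [0.0, 0.0]
best_mem  = [[0.05, -0.1], [0.0, 0.0]]
worst_mem = [[0.9, 0.8]]
print(update_velocity(v, x, pbest, gbest, best_mem, worst_mem))
```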