Particle Swarm Optimisation (PSO) Algorithm in MATLAB


Abstract

The Particle Swarm Optimisation (PSO) algorithm was inspired by the social and biological behaviour of bird flocks searching for food sources. In this nature-inspired algorithm, individuals are referred to as particles and fly through the search space seeking the global best position that minimises (or maximises) a given objective. Today, PSO is one of the most well-known and widely used swarm intelligence algorithms and metaheuristic techniques, owing to its simplicity and its applicability to a wide range of problems. However, in-depth studies of the algorithm have led to the identification of a number of problems with it, especially convergence and performance issues. Consequently, a myriad of variants, enhancements and extensions to the original version of the algorithm, introduced in the mid-1990s, have been proposed, especially in the last two decades. In this article, a systematic literature review of those variants and improvements is made, which also covers the hybridisation and parallelisation of the algorithm and its extensions to other classes of optimisation problems, taking the most important ones into consideration. These approaches and improvements are summarised, organised and presented so as to facilitate the identification of the most appropriate PSO variant for a particular application.



Introduction

The Particle Swarm Optimisation (PSO) technique was proposed and initially developed by the electrical engineer Russell C. Eberhart and the social psychologist James Kennedy. The method was described in two papers co-authored by those two authors and published in 1995, one of them having as its title the exact name of the technique they proposed.

This technique had (and still has) a deep connection with the social concepts and behaviours that emerged from a computational study and simulation, conducted by those authors, of a simplified social model of a bird flock seeking food. It belongs to the field of swarm intelligence, an important and extensive research area within natural computing.

The PSO method is based on the premise that knowledge is shared not only between generations but also between members of the same generation. Although PSO has some characteristics that, to a certain extent, resemble those of other population-based computational models, such as Genetic Algorithms (GA) and other evolutionary computing techniques, it has the benefit of being relatively simple, and its algorithm is comparatively easy to describe and implement.

In fact, its simplicity and apparent competence in finding optimal solutions in complex search spaces made the PSO algorithm well known among the scientific community, which contributed to its study and improvement. Thus, many approaches have been suggested and different applications tested, especially over the past decade. This review summarises the main developments related to the PSO algorithm, from its original formulation up to current work.
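To make the mechanism concrete, the canonical PSO update described above can be sketched in a few dozen lines. The following Python sketch uses illustrative, not prescriptive, parameter values; the inertia weight w is a widely adopted later enhancement of the original 1995 formulation, included here because it is now standard practice:

```python
import random

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimiser (minimisation)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialise random positions and zero velocities.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_cost = [objective(p) for p in pos]     # personal best costs
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            cost = objective(pos[i])
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < gbest_cost:
                    gbest, gbest_cost = pos[i][:], cost
    return gbest, gbest_cost

# Usage: minimise the sphere function, whose global minimum is 0 at the origin.
best, cost = pso(lambda x: sum(v * v for v in x), dim=5)
```

Each particle is pulled towards its own best-known position (the cognitive term) and the swarm's best-known position (the social term), which is precisely the intra- and inter-generational sharing of information described above.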

Related Work

This section presents a brief overview of the main general studies related to the PSO algorithm.

Poli et al. presented an overview of the great efforts which have given impetus and direction to research in particle swarms, as well as some important new applications and directions. An analysis of IEEE Xplore and Google Scholar citations and publications from 1995 to 2006 was presented in this work, illuminating the sense originally intended by Kennedy and Eberhart [92]. The strength of this study was its comprehensive presentation of challenges and open issues in the PSO algorithm. However, it did not consider the compatibility of PSO applications with each presented approach.

Banks et al. offered, in two parts, a timely and brief review of the field in general, alongside the opportunities and challenges emanating from the versatile application of PSO. Part I considered the history and background of PSO and its position within the broader paradigm of natural computing. The review then discussed different improvements to the native formulation of PSO in both discrete and continuous problems, swarm behaviour analysis, and measures taken to address stagnation. Furthermore, it covered research on adaptations for parallel implementation, algorithm configuration, and dynamic environments. The achievement of this study was to identify two significant areas of challenge for future development: swarm stagnation and dynamic environments.


Modifications to the Particle Swarm Optimisation

Other different aspects of the original version of PSO have also been modified, and many variants have been proposed to address different kinds of problems; e.g., a discrete binary version of PSO that is useful for combinatorial optimisation problems, such as the travelling salesman problem and task scheduling problems.
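The discrete case can be illustrated with a hedged Python sketch in the spirit of the binary PSO variant: velocities remain continuous, but each bit of the position is resampled with a probability obtained by squashing the corresponding velocity through a sigmoid. The function and parameter names here are illustrative, not taken from any particular library:

```python
import math
import random

def binary_pso_step(pos, vel, pbest, gbest,
                    w=1.0, c1=2.0, c2=2.0, rng=random):
    """One update step for a single binary PSO particle.

    pos, pbest, gbest are 0/1 bit lists; vel is a list of floats.
    Velocities are updated as in continuous PSO, then each bit is
    resampled with probability sigmoid(velocity)."""
    for d in range(len(pos)):
        r1, r2 = rng.random(), rng.random()
        vel[d] = (w * vel[d]
                  + c1 * r1 * (pbest[d] - pos[d])
                  + c2 * r2 * (gbest[d] - pos[d]))
        prob = 1.0 / (1.0 + math.exp(-vel[d]))   # sigmoid squashing
        pos[d] = 1 if rng.random() < prob else 0
    return pos, vel
```

In a combinatorial setting, the bit string would encode a candidate solution (e.g. item selection in a knapsack problem), with fitness evaluated on the decoded solution; in practice, velocity clamping is usually added so the sigmoid does not saturate and freeze the bits.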

Over time, PSO gained even more attention, and more research was conducted on it (see, e.g., analyses of the trajectory and velocity of each particle during the execution of the PSO algorithm). This led many researchers to notice problems with the original version of PSO, such as premature convergence (especially in multimodal domains) or performance issues (addressed, e.g., by reducing the number of fitness evaluations through the use of an estimated fitness value for each particle).

Premature convergence happens when some particle finds a local best position in the swarm that is not the global best solution to the problem. Other particles then mistakenly fly towards it without exploring other regions of the search space. As a consequence, the algorithm becomes trapped in that local optimum and converges prematurely.