# 01 The origin of the algorithm

Particle swarm optimization (PSO) is an evolutionary computation technique proposed in 1995 by Dr. Eberhart and Dr. Kennedy, inspired by studies of the foraging behavior of bird flocks. The algorithm was originally motivated by the regularities observed in flocking behavior, which were then distilled into a simplified swarm-intelligence model. Building on observations of how animal groups behave, PSO uses information sharing among the individuals of a swarm so that the motion of the whole group evolves from disorder to order in the solution space, eventually arriving at an optimal solution.

# 02 What is particle swarm optimization?

## 2.1 Formal definition (from an encyclopedia entry)

Particle swarm optimization (PSO), also known as the particle swarm algorithm or the bird foraging algorithm, is an evolutionary algorithm (EA) developed in recent decades by J. Kennedy and R. C. Eberhart. Like simulated annealing, PSO starts from a population of random solutions, searches for the optimum iteratively, and evaluates the quality of solutions by a fitness function. Its rules, however, are simpler than those of genetic algorithms: it has no "crossover" or "mutation" operations, and instead finds the global optimum by following the current best-known solutions. Its ease of implementation, high accuracy, and fast convergence have attracted wide academic attention, and it has shown its strengths on practical problems. PSO is also an inherently parallel algorithm.

## 2.2 An informal description

As described above, PSO simulates the foraging behavior of a bird flock. Imagine the following scenario: a flock of birds searches randomly for food in an area that contains exactly one piece of food. None of the birds knows where the food is, but each bird knows how far it currently is from the food. What, then, is the best strategy for finding the food? The simplest and most effective one is to search the area around the bird that is currently closest to the food.

Throughout the search the birds communicate with one another, so every bird knows where the others are. Through this collaboration they can judge whether the optimal solution has been found, and the information about the best solution found so far is passed on to the whole flock. Eventually the entire flock gathers around the food source; that is, the optimal solution has been found.

In PSO, every candidate solution of the optimization problem is a "bird" in the search space, which we call a particle. Each particle has a fitness value determined by the function being optimized, and a velocity that determines the direction and distance of its flight. The particles then follow the current best particle and search through the solution space.

PSO is initialized with a population of random particles (random solutions) and then searches for the optimum iteratively. In each iteration a particle updates itself by tracking two "extrema". The first is the best solution the particle itself has found so far, called the individual best pBest. The other is the best solution found by the whole population, the global best gBest. Alternatively, instead of the whole population, only a subset can be used as a particle's neighborhood, in which case the best value among all neighbors is a local best.

## 2.3 A more intuitive description

The basic idea of particle swarm optimization is to find the optimal solution through cooperation and information sharing among the individuals of a group. Consider the scenario above again: a flock of birds is looking for food, and there is only one worm in the area. None of the birds knows where the food is, but each knows how far it is from the food, and each knows the position of the bird currently closest to it. What will happen?

At the same time, as each bird moves, its distance to the food keeps changing, so every bird also remembers the position at which it was closest to the food; this position serves as its own reference point.

Particle swarm optimization therefore treats each bird as a particle with a position and a velocity. Each particle adjusts its flight direction according to the closest-to-food position it has found itself and the closest-to-food position shared across the whole swarm. In the end the whole swarm converges to the same place, which is the region closest to the food, and under good conditions the food is found there.

# 03 Particle abstraction

## 3.1 Velocity and position

Particle swarm optimization (PSO) models the birds of a flock as massless particles with only two properties: a velocity, which determines the direction and distance of the particle's next move, and a position, which represents a candidate solution.

Each bird is abstracted as a particle (a point) without mass or volume and extended to an N-dimensional space. The position of particle i in N-dimensional space is represented by the vector Xi = (x1, x2, …, xN), and its flight velocity by the vector Vi = (v1, v2, …, vN). Each particle has a fitness value determined by the objective function, and it knows both the best position it has found so far (pbest) and its current position Xi; this can be seen as the particle's own flight experience. In addition, each particle knows the best position found so far by any particle in the entire population (gbest, the best of all pbest values); this can be seen as the experience of the particle's companions. Each particle decides its next move based on its own experience and the best experience of its companions.
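The abstraction above can be written down directly as a small data holder. The sketch below is only illustrative; the class and field names are this edit's assumptions, not taken from the original article:

```java
import java.util.Random;

// Sketch of the particle abstraction: an N-dimensional position X_i,
// a velocity V_i, and the best position/fitness seen so far (pbest).
// All names here are illustrative, not from the original article.
public class Particle {
    final double[] x;      // position  X_i = (x_1, ..., x_N)
    final double[] v;      // velocity  V_i = (v_1, ..., v_N)
    final double[] pbestX; // best position this particle has visited so far
    double pbestFit;       // fitness value at pbestX

    Particle(int dims, Random rnd) {
        x = new double[dims];
        v = new double[dims];
        pbestX = new double[dims];
        for (int d = 0; d < dims; d++) {
            x[d] = rnd.nextDouble();               // random initial position
            v[d] = (rnd.nextDouble() - 0.5) * 0.1; // small random initial velocity
            pbestX[d] = x[d];                      // best-so-far starts at the start
        }
        pbestFit = Double.NEGATIVE_INFINITY;       // no fitness evaluated yet
    }
}
```

The gbest value would live outside this class, in whatever structure holds the whole swarm, since it is shared by all particles.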

## 3.2 Velocity and position updates

PSO is initialized with a population of random particles (random solutions) and then searches for the optimum iteratively. In each iteration, every particle updates itself by tracking the two extrema (pbest and gbest). After these two best values have been found, the particle updates its velocity and position with the following formulas.

Formula (1) updates the velocity of particle i:

v_i = v_i + c1 * rand() * (pbest_i - x_i) + c2 * rand() * (gbest - x_i)

where rand() is a uniform random number in (0, 1) and c1, c2 are the learning factors.

Part ① of formula (1), the old velocity v_i, is called the memory term; it carries over the influence of the previous speed and direction.

Part ② of formula (1), c1 * rand() * (pbest_i - x_i), is called the self-cognition term; it is a vector pointing from the current position toward the particle's own best position, representing the part of the motion that comes from the particle's own experience.

Part ③ of formula (1), c2 * rand() * (gbest - x_i), is called the group-cognition term; it is a vector pointing from the current position toward the population's best position, reflecting cooperation and knowledge sharing among the particles. Each particle decides its next move from its own experience combined with the best experience of its companions.

Adding an inertia weight w to the memory term of formula (1) gives the pair of update rules below:

Formula (2): v_i = w * v_i + c1 * rand() * (pbest_i - x_i) + c2 * rand() * (gbest - x_i)

Formula (3): x_i = x_i + v_i

**Formulas (2) and (3) are regarded as the standard PSO algorithm.**
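The two update rules of standard PSO translate almost line-for-line into code. Here is a minimal sketch in Java; the class name, the method signature, and the symmetric velocity clamp are illustrative choices of this edit, not part of the original article:

```java
import java.util.Random;

// Minimal sketch of the standard PSO update, formulas (2) and (3).
// Only the two formulas come from the text; every name here is illustrative.
public class PsoUpdate {
    // One velocity-and-position step for a single particle.
    public static void step(double[] x, double[] v,
                            double[] pbestX, double[] gbestX,
                            double w, double c1, double c2,
                            double vmax, Random rnd) {
        for (int d = 0; d < x.length; d++) {
            double r1 = rnd.nextDouble(); // fresh random factors per dimension
            double r2 = rnd.nextDouble();
            // formula (2): memory term + self-cognition term + group-cognition term
            v[d] = w * v[d]
                 + c1 * r1 * (pbestX[d] - x[d])
                 + c2 * r2 * (gbestX[d] - x[d]);
            // keep the speed inside [-vmax, vmax]
            if (v[d] > vmax) v[d] = vmax;
            if (v[d] < -vmax) v[d] = -vmax;
            // formula (3): move the particle
            x[d] += v[d];
        }
    }
}
```

Note that when a particle already sits at both its personal best and the global best, terms ② and ③ vanish and the velocity simply decays by the inertia weight w.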

# 04 Standard PSO algorithm flow

## 4.1 Standard PSO algorithm flow

1) Initialize a swarm of particles (population size N) with random positions and velocities;

2) Evaluate the fitness of each particle;

3) For each particle, compare its current fitness with that of the best position it has visited so far (pbest); if the current position is better, take it as the new pbest;

4) For each particle, compare its current fitness with that of the best position visited by the whole population (gbest); if the current position is better, take it as the new gbest;

5) Update each particle's velocity and position according to formulas (2) and (3);

6) If the termination condition is not met, return to step 2).

Depending on the problem, the termination condition is usually a maximum number of iterations Gk, or the requirement that the best position found so far reaches a predefined minimum fitness threshold, or both.
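The six steps above fit in a short program. The sketch below maximizes the same test function used later in section 05, f(x) = -x * (x - 2) on [0, 2]; the swarm size, iteration count, and all names are illustrative assumptions of this edit:

```java
import java.util.Random;

// Sketch of the standard PSO flow (steps 1-6), maximizing f(x) = -x*(x-2)
// on [0, 2]. Class/method names and parameter values are illustrative.
public class SimplePso {
    static double f(double x) { return -x * (x - 2); } // fitness, maximum at x = 1

    public static double run(int n, int maxIter, long seed) {
        Random rnd = new Random(seed);
        double w = 0.4, c1 = 2, c2 = 2, vmax = 0.1;
        double[] x = new double[n], v = new double[n];
        double[] pbestX = new double[n], pbestF = new double[n];
        double gbestX = 0, gbestF = Double.NEGATIVE_INFINITY;
        // Step 1: random positions and velocities; Step 2: evaluate fitness
        for (int i = 0; i < n; i++) {
            x[i] = 2 * rnd.nextDouble();
            v[i] = (rnd.nextDouble() - 0.5) * 0.1;
            pbestX[i] = x[i];
            pbestF[i] = f(x[i]);
            if (pbestF[i] > gbestF) { gbestF = pbestF[i]; gbestX = x[i]; }
        }
        // Step 6: repeat until the termination condition (here: max iterations)
        for (int iter = 0; iter < maxIter; iter++) {
            for (int i = 0; i < n; i++) {
                // Step 5: formulas (2) and (3)
                v[i] = w * v[i]
                     + c1 * rnd.nextDouble() * (pbestX[i] - x[i])
                     + c2 * rnd.nextDouble() * (gbestX - x[i]);
                if (v[i] > vmax) v[i] = vmax;
                if (v[i] < -vmax) v[i] = -vmax;
                x[i] += v[i];
                if (x[i] > 2) x[i] = 2; // stay inside the search interval
                if (x[i] < 0) x[i] = 0;
                double fit = f(x[i]);                                       // step 2
                if (fit > pbestF[i]) { pbestF[i] = fit; pbestX[i] = x[i]; } // step 3
                if (fit > gbestF) { gbestF = fit; gbestX = x[i]; }          // step 4
            }
        }
        return gbestX; // best position found; the true maximum is at x = 1
    }
}
```

A call like `SimplePso.run(20, 300, 7)` should return a value close to 1, the maximizer of the test function.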

## 4.2 PSO Flow chart

## 4.3 Analysis of the learning factors c1 and c2

In formulas (2) and (3), pbest and gbest represent the individual and global best positions of the swarm, respectively.

When c1 = 0, the particles have no cognitive ability and the model becomes a social-only model:

v_i = w * v_i + c2 * rand() * (gbest - x_i)

This is called the global PSO algorithm. The particles are good at expanding the search space and converge faster, but because local search is missing, on complex problems this variant falls into local optima more easily than standard PSO.

When c2 = 0, there is no social information exchange among the particles and the model becomes a cognition-only model:

v_i = w * v_i + c1 * rand() * (pbest_i - x_i)

This is called the local PSO algorithm. Since the individuals exchange no information, the whole population amounts to many particles performing blind random searches; convergence is slow, and the chance of reaching the optimal solution is small.

# 05 Code example explanation

## 5.1 Let's start with a simple example

In this example we use particle swarm optimization to find the maximum of the function y = -x * (x - 2) on [0, 2]. We then step through the algorithm's output and discuss how the particles move.

Now let's look at the code and its comments.

```java
public class AlgorithmPSO {
    int n = 2;         // number of particles; only two, so we can watch them move
    double[] y;        // fitness of each particle
    double[] x;        // position of each particle
    double[] v;        // velocity of each particle
    double c1 = 2;     // self-cognition learning factor
    double c2 = 2;     // group-cognition learning factor
    double[] pbest;    // best fitness each particle has found so far
    double[] pbestx;   // position at which that best fitness was found
    double gbest;      // best fitness found by the whole swarm
    double gbestx;     // position of the swarm's best fitness
    double vmax = 0.1; // maximum speed

    // Fitness function: every particle is evaluated on y = -x * (x - 2)
    public void fitnessFunction() {
        for (int i = 0; i < n; i++) {
            y[i] = -1 * x[i] * (x[i] - 2);
        }
    }

    public void init() { // initialization
        x = new double[n];
        v = new double[n];
        y = new double[n];
        pbest = new double[n];
        pbestx = new double[n];
        // The positions should be generated randomly; for the demonstration
        // we simply drop the two particles on either side of the maximum.
        x[0] = 0.0;
        x[1] = 2.0;
        v[0] = 0.01;
        v[1] = 0.02;
        fitnessFunction();
        // Initialize each particle's personal best and find the swarm's best
        for (int i = 0; i < n; i++) {
            pbest[i] = y[i];
            pbestx[i] = x[i];
            if (y[i] > gbest) { gbest = y[i]; gbestx = x[i]; }
        }
        System.out.println("Algorithm started, initial best solution: " + gbest);
        System.out.print("\n");
    }

    // Particle swarm optimization loop
    public void PSO(int max) {
        for (int i = 0; i < max; i++) {
            double w = 0.4; // inertia weight
            for (int j = 0; j < n; j++) {
                // Update velocity and position with formulas (2) and (3)
                v[j] = w * v[j] + c1 * Math.random() * (pbestx[j] - x[j])
                                + c2 * Math.random() * (gbestx - x[j]);
                if (v[j] > vmax) v[j] = vmax;   // clamp the speed to [-vmax, vmax]
                if (v[j] < -vmax) v[j] = -vmax;
                x[j] += v[j];
                // Boundary check: keep the position inside [0, 2]
                if (x[j] > 2) x[j] = 2;
                if (x[j] < 0) x[j] = 0;
            }
            fitnessFunction();
            // Update the individual and global bests
            for (int j = 0; j < n; j++) {
                if (y[j] > pbest[j]) { pbest[j] = y[j]; pbestx[j] = x[j]; }
                if (pbest[j] > gbest) { gbest = pbest[j]; gbestx = pbestx[j]; }
                System.out.println("Particle n" + j + ": x = " + x[j] + " v = " + v[j]);
            }
            System.out.println("Iteration " + (i + 1) + ", global best gbest = " + gbest);
            System.out.print("\n");
        }
    }

    // Run the algorithm
    public static void main(String[] args) {
        AlgorithmPSO ts = new AlgorithmPSO();
        ts.init();
        ts.PSO(10); // for the demonstration we iterate only 10 times
    }
}
```

Output of one run (the algorithm is stochastic, so the exact numbers vary from run to run):

```
Algorithm started, initial best solution: 0.0

Particle n0: x = 0.004 v = 0.004
Particle n1: x = 0.0 v = -4.065770842472382
Iteration 1, global best gbest = 0.007984

Particle n0: x = 0.01778510589090629 v = 0.013785105890906289
Particle n1: x = 0.0 v = -1.625639647649872
Iteration 2, global best gbest = 0.03525390179026183

Particle n0: x = 0.0610276658084214 v = 0.04324255991751511
Particle n1: x = 0.0 v = -0.6035255880722042
Iteration 3, global best gbest = 0.11833095562281844

Particle n0: x = 0.1610276658084214 v = 0.1
Particle n1: x = 0.0 v = -0.012719944703824898
Iteration 4, global best gbest = 0.29612542246113416

Particle n0: x = 0.2610276658084214 v = 0.1
Particle n1: x = 0.06231495466940402 v = 0.06231495466940402
Iteration 5, global best gbest = 0.4539198892994499

Particle n0: x = 0.3610276658084214 v = 0.1
Particle n1: x = 0.16231495466940402 v = 0.1
Iteration 6, global best gbest = 0.5917143561377656

Particle n0: x = 0.46102766580842136 v = 0.1
Particle n1: x = 0.262314954669404 v = 0.1
Iteration 7, global best gbest = 0.7095088229760813

Particle n0: x = 0.5610276658084213 v = 0.1
Particle n1: x = 0.362314954669404 v = 0.1
Iteration 8, global best gbest = 0.8073032898143969

Particle n0: x = 0.6610276658084213 v = 0.1
Particle n1: x = 0.462314954669404 v = 0.1
Iteration 9, global best gbest = 0.8850977566527127

Particle n0: x = 0.7610276658084213 v = 0.1
Particle n1: x = 0.562314954669404 v = 0.1
Iteration 10, global best gbest = 0.9428922234910285
```

Now let's trace how the position x of each particle (its distance from the food) changes from iteration to iteration.

1) Initial state

Particle n0: x = 0.0 v = 0.01

Particle n1: x = 2.0 v = 0.02

The two particles start at opposite ends of the interval.

2) First iteration

Particle n0: x = 0.004 v = 0.004

Particle n1: x = 0.0 v = -4.065770842472382

Both particles are now near the origin.

3) Second through tenth iterations

As the listing shows, the two particles move closer and closer to the optimum, and their positions cluster more and more densely — all after only 10 iterations. If we increase the number of iterations, the optimal solution is found easily; after 100 iterations the particles pack tightly around the maximum.

This simple example should give a clear picture of how the particle swarm optimization algorithm behaves.

# 06 Comparison of PSO and GA

## 6.1 Similarities

(1) Both are bio-inspired algorithms.

(2) Both are global optimization methods.

(3) Both are stochastic search algorithms.

(4) Both are implicitly parallel.

(5) Both search using only individual fitness information, so they are not restricted by properties of the objective function such as continuity or differentiability.

(6) On high-dimensional complex problems, both often suffer from premature convergence and poor convergence performance, and neither can guarantee convergence to the optimum.

## 6.2 Differences

(1) PSO has memory: knowledge of good solutions is retained by all particles, whereas in GA the previous knowledge changes as the population evolves.

(2) In PSO, particles share information only through the current best solution, so information sharing is largely a one-way mechanism. In GA, chromosomes share information with each other, moving the whole population toward the optimal region.

(3) GA needs encoding techniques and genetic operators; PSO, by contrast, has no crossover or mutation operations and particles update themselves only through their internal velocities, so it is simpler, has fewer parameters, and is easier to implement.

# 07 Code acquisition

To get the code, follow our WeChat official account 【 Program ape 】 and reply **PSO** in the background to download it.

