diff --git a/README.md b/README.md
index b12664a..c366189 100755
--- a/README.md
+++ b/README.md
@@ -1,24 +1,24 @@
 # BioPy
-####Overview:
+#### Overview:
 ----
 BioPy is a collection (in-progress) of biologically-inspired algorithms written in Python. Some of the algorithms included are more focused on artificial models of biological computation, such as Hopfield Neural Networks, while others are inherently more biologically focused, such as the basic genetic programming module included in this project. Use it for whatever you like, and please contribute back to the project by cleaning up code that is here or contributing new code for applications in biology that you may find interesting to program.
 
 NOTE: The code is currently messy in some places. If you want to make a Pull Request that tidies up the code, it would certainly be merged. Since most of this was written in the middle of our (jaredmichaelsmith and davpcunn) graduate Bio-Inspired Computation course, there are some places where the code has diverged into a dark chasm of non-Pythonic mess, despite the algorithms still performing very well. Contributions in this area are much appreciated!
-####Dependencies:
+#### Dependencies:
 ----
 - NumPy
 - SciPy
 - Scikit-Learn
 - Matplotlib
-####Categories
+#### Categories
 ----
 Below you will find several categories of applications in this project.
-#####Neural Networks:
+##### Neural Networks:
 ----
 - Hopfield Neural Network
 - Back Propagation Neural Network
@@ -32,7 +32,7 @@ Below you will find several categories of applications in this project.
   - Labelled Faces in the Wild Dataset: http://vis-www.cs.umass.edu/lfw/
     - From the scikit-learn package, originally collected by the University of Massachusetts, Amherst
-#####Genetic Programming:
+##### Genetic Programming:
 ----
 - Basic Genetic Computation Algorithm
 - Features:
@@ -40,9 +40,8 @@ Below you will find several categories of applications in this project.
 - crossover and mutation of genes
 - learning ability for offspring of each generation
-#####Self-Organization and Autonomous Agents
+##### Self-Organization and Autonomous Agents
 ----
 - Particle Swarm Optimization
 - Features:
-  - You can configure many of the parameters of the algorithm such as the velocity, acceleration, and number of initial particles, as well as several other parameters.
-
+  - You can configure many parameters of the algorithm, such as the velocity, acceleration, and number of initial particles.
\ No newline at end of file
diff --git a/selforganization/README.md b/selforganization/README.md
index 6bbcc95..d84fc80 100644
--- a/selforganization/README.md
+++ b/selforganization/README.md
@@ -1,4 +1,19 @@
 Self-Organization
 ====
-- Author: David Cunningham
-- Particle Swarm Optimization
+Code Contributor: David Cunningham
+- [Particle Swarm Optimization](https://ieeexplore.ieee.org/document/488968)
+  Authors: James Kennedy and Russell Eberhart
+
+Particle Swarm Optimization (PSO) is a population-based optimization algorithm inspired by the social behavior of bird flocking and fish schooling. It was originally proposed by Kennedy and Eberhart in 1995. The algorithm simulates the movement and interaction of a group of particles in a search space to find the optimal solution to a given optimization problem.
+
+In PSO, each particle represents a potential solution to the problem and moves through the search space by adjusting its position and velocity. The particles "swarm" towards better regions of the search space based on their own experience and the collective knowledge of the swarm. The algorithm iteratively updates the particles' positions and velocities using two main components: personal best (pbest) and global best (gbest).
+
+The personal best is the best position that a particle has found so far in its own search history.
It is the position where the particle achieved its best objective function value. The global best is the best position found by any particle in the entire swarm: the best overall solution discovered so far.
+
+During each iteration of the algorithm, particles update their velocities and positions based on their current positions, velocities, pbest, and gbest. The update rule combines the particle's previous velocity, its attraction towards its pbest, and its attraction towards the gbest. By adjusting their velocities and positions, particles explore the search space and converge towards promising regions that are likely to contain the optimal solution.
+
+PSO is known for its simplicity and ease of implementation. It has been successfully applied to a wide range of optimization problems, including continuous, discrete, and combinatorial problems. It is particularly effective on problems with non-linear and non-convex objective functions, where traditional optimization techniques may struggle.
+
+However, PSO also has some limitations. It can suffer from premature convergence, where the swarm gets trapped in a suboptimal solution and fails to explore other promising regions. Various modifications and variants of PSO have been proposed to mitigate this issue, such as adaptive parameters, diversity-maintenance mechanisms, or problem-specific knowledge.
+
+Overall, PSO is a simple yet versatile optimization algorithm that leverages the collective behavior of a swarm to explore and exploit the search space. It has found applications in various domains, including engineering, finance, data mining, and machine learning.
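The velocity and position update described above can be sketched in a few lines of Python. This is an illustrative, self-contained sketch, not the repository's implementation; the parameter names `w` (inertia), `c1` (cognitive), and `c2` (social) are conventional choices, and the objective here is an arbitrary concave function used only for demonstration.

```python
import random

def pso(objective, bounds, n_particles=20, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize `objective` over the box `bounds` with a basic PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best-known position
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # best position found by the swarm

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + pull toward pbest + pull toward gbest.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val > pbest_val[i]:               # maximization
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Maximize a simple concave function whose optimum is at (3, -1).
best, best_val = pso(lambda p: -((p[0] - 3) ** 2 + (p[1] + 1) ** 2),
                     bounds=[(-10, 10), (-10, 10)])
```

The swarm converges quickly on this smooth 2-D problem; real uses would add velocity clamping or bound handling, which are omitted here for brevity.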
diff --git a/selforganization/pso/Problems.py b/selforganization/pso/Problems.py
index dd719f8..e7222b7 100644
--- a/selforganization/pso/Problems.py
+++ b/selforganization/pso/Problems.py
@@ -1,18 +1,88 @@
 import math
+
 def mdist(maxX, maxY):
-    return math.sqrt(maxX**2 + maxY**2/2)
+    '''
+    Calculate a normalizing distance from the maximum coordinates:
+    sqrt(maxX**2 + maxY**2 / 2). Note that only maxY**2 is halved
+    (operator precedence), so this is neither a Euclidean nor a
+    Manhattan distance.
+
+    Parameters:
+    maxX (float): Maximum X-coordinate.
+    maxY (float): Maximum Y-coordinate.
+
+    Returns:
+    float: Normalizing distance.
+
+    Example:
+    mdist(5, 8)  # Returns: math.sqrt(57), approximately 7.5498
+    '''
+    return math.sqrt(maxX ** 2 + maxY ** 2 / 2)
+
 def pdist(px, py):
-    return math.sqrt((px-20)**2 + (py-7)**2)
+    '''
+    Calculate the Euclidean distance between the given point and (20, 7).
+
+    Parameters:
+    px (float): X-coordinate of the point.
+    py (float): Y-coordinate of the point.
+
+    Returns:
+    float: Euclidean distance.
+
+    Example:
+    pdist(10, 3)  # Returns: math.sqrt(116), approximately 10.7703
+    '''
+    return math.sqrt((px - 20) ** 2 + (py - 7) ** 2)
+
 def ndist(px, py):
-    return math.sqrt((px+20)**2 + (py+7)**2)
+    '''
+    Calculate the Euclidean distance between the given point and (-20, -7).
+
+    Parameters:
+    px (float): X-coordinate of the point.
+    py (float): Y-coordinate of the point.
+
+    Returns:
+    float: Euclidean distance.
+
+    Example:
+    ndist(-10, -3)  # Returns: math.sqrt(116), approximately 10.7703
+    '''
+    return math.sqrt((px + 20) ** 2 + (py + 7) ** 2)
+
 def Problem1(pos, maxes):
-    return 100*(1-pdist(pos[0], pos[1])/mdist(maxes[0], maxes[1]))
+    '''
+    Evaluate the Problem 1 fitness at the given position:
+    100 * (1 - pdist / mdist), which is maximal at (20, 7).
+
+    Parameters:
+    pos (list): List containing X and Y coordinates of the position.
+    maxes (list): List containing maximum X and Y coordinates.
+
+    Returns:
+    float: Fitness value for Problem 1.
+
+    Example:
+    Problem1([10, 5], [5, 8])  # Returns approximately -35.08
+    '''
+    return 100 * (1 - pdist(pos[0], pos[1]) / mdist(maxes[0], maxes[1]))
+
 def Problem2(pos, maxes):
-    pd = pdist(pos[0], pos[1])
-    nd = ndist(pos[0], pos[1])
-    md = mdist(maxes[0], maxes[1])
-
-    ret = 9*max(0, 10-pd**2)
-    ret+=10*(1-pd/md)
-    ret+=70*(1-nd/md)
-    return ret
\ No newline at end of file
+    '''
+    Evaluate the Problem 2 fitness at the given position. It combines a
+    sharp bonus near (20, 7) with weighted distance terms pulling toward
+    (20, 7) and (-20, -7).
+
+    Parameters:
+    pos (list): List containing X and Y coordinates of the position.
+    maxes (list): List containing maximum X and Y coordinates.
+
+    Returns:
+    float: Fitness value for Problem 2.
+
+    Example:
+    Problem2([10, 5], [5, 8])  # Returns approximately -233.09
+    '''
+    pd = pdist(pos[0], pos[1])
+    nd = ndist(pos[0], pos[1])
+    md = mdist(maxes[0], maxes[1])
+
+    ret = 9 * max(0, 10 - pd ** 2)
+    ret += 10 * (1 - pd / md)
+    ret += 70 * (1 - nd / md)
+    return ret
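As a quick sanity check on the fitness landscape of `Problem2`, the functions are reproduced below so the snippet is self-contained. The `9 * max(0, 10 - pd**2)` spike plus the `pdist` term make the neighborhood of (20, 7) the global peak, while the heavily weighted `ndist` term creates a competing local peak near (-20, -7); the bounds `[50, 50]` are an arbitrary choice for illustration.

```python
import math

def mdist(maxX, maxY):
    # Normalizing distance used by both problems: sqrt(maxX^2 + maxY^2 / 2).
    return math.sqrt(maxX ** 2 + maxY ** 2 / 2)

def pdist(px, py):
    # Euclidean distance to the point (20, 7).
    return math.sqrt((px - 20) ** 2 + (py - 7) ** 2)

def ndist(px, py):
    # Euclidean distance to the point (-20, -7).
    return math.sqrt((px + 20) ** 2 + (py + 7) ** 2)

def Problem2(pos, maxes):
    pd, nd, md = pdist(*pos), ndist(*pos), mdist(*maxes)
    ret = 9 * max(0, 10 - pd ** 2)   # sharp bonus only very close to (20, 7)
    ret += 10 * (1 - pd / md)        # mild pull toward (20, 7)
    ret += 70 * (1 - nd / md)        # strong pull toward (-20, -7)
    return ret

maxes = [50, 50]
global_peak = Problem2([20, 7], maxes)    # spike and pd term dominate here
local_peak = Problem2([-20, -7], maxes)   # nd term is maximal here
elsewhere = Problem2([10, 5], maxes)
```

A multimodal surface like this is exactly where a population-based method such as PSO can outperform a single-start local search, and where premature convergence toward the (-20, -7) basin can occur.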